1272 Bond Street, Naperville, IL 60563 630-505-7500
Cloud Compute

Cato Launches Instant Access: The First SASE-Based Clientless Access Service to Enable Enterprises to Support Work-From-Home at Scale

Cato Networks, provider of the world’s first SASE platform, today introduced Cato SDP with Instant Access to help IT leaders rapidly deliver work-from-home solutions at scale worldwide. Instant Access adds a new clientless access option and application portal to Cato SDP, the first software-defined perimeter (SDP) solution to leverage a true secure access service edge (SASE) architecture, delivering shorter rollout times, unlimited scalability, continuous threat prevention, and optimized performance worldwide.

“With the global health crisis, enterprises are looking to deploy work-from-home capabilities at scale. Cato has seen remote access adoption more than double since the outbreak of COVID-19. The enhancements to Cato SDP will further help IT leaders to quickly deliver secure remote access at scale to their employees across the globe,” says Shlomo Kramer, CEO and co-founder of Cato Networks.

Cato SDP With Instant Access Delivers Optimized Remote Access Worldwide in Minutes

As work-from-home becomes the norm, remote access has become an even more critical part of IT infrastructure. Legacy VPN servers suffer from scalability limitations that hinder extending work-from-home access to all employees, and from performance problems for distant remote users. VPNs also introduce security risks, as malicious users are a mere password away from sensitive business-critical resources.

Cato SDP addresses those challenges. With Instant Access, users can only access authorized applications. They simply click a URL, authenticate once through single sign-on (SSO), and gain access to their portal of authorized applications. For those requiring full access to both Web and legacy applications, Cato continues to offer its Cato Client as part of Cato SDP.

With Instant Access, Cato SDP makes securely accessing applications remotely easy.

Cato SDP Leverages the Power of SASE to Transform Remote Access

By leveraging Cato’s global SASE platform, Cato SDP with Instant Access solves the critical scaling, performance, security, and management limitations that have hampered legacy mobile access solutions. Specifically, Cato SDP delivers:

Rapid Deployment

Cato SDP deploys instantly, requiring no additional software on the mobile device and no SDP connector software or SDP gateway hardware in the datacenter. Because Cato already serves as the enterprise network and controls application flows, customers can publish applications with just a few clicks in the Cato management console.

Unlimited Scalability

Cato’s SASE cloud-native and globally distributed architecture supports an unlimited number of users across the globe. Users can easily move from the office to their homes, or work on the road, with their access being consistently secure and always optimized.

Optimal Global Performance

Cato SDP sends remote traffic across Cato’s optimized, global private backbone, not the unpredictable public Internet. Remote users are first-class citizens on the corporate network.

Secure Access

Multi-factor authentication is part of the SASE platform and is provided with Cato SDP. What’s more, restricting access to approved applications and eliminating network credentials not only simplifies the user experience but also removes the risk of attackers or advanced malware reaching unauthorized network resources.

Continuous Threat Prevention

Cato’s cloud-based network security stack continuously protects remote workers against network-based threats. The stack includes NGFW, SWG, IPS, advanced anti-malware, and a managed detection and response (MDR) service.

Single-Pane-of-Glass Management

Cato SDP is configured, maintained, and managed through the same portal as the rest of Cato’s networking and security services, keeping configuration and management simple.

The Cato management console is a single pane of glass for managing remote access and the rest of the enterprise network.

Enterprises Rely on Cato SDP for Remote Access During Global Health Crisis

Many are already benefiting from the power of Cato SDP. Here’s what several enterprises had to say:

ASM Assembly Systems

“Cato has helped us respond to the COVID-19 outbreak significantly faster than would otherwise have been possible. We had been using a firewall as our VPN server but when our users shifted to working from home, we saw the CPU load jump to 79% as concurrent VPN usage more than tripled. We expect to hit over 90% when our VPN usage quintuples by end of week,” says Ian Bleazard, IT Director of Infrastructure and Analytics in the SMT segment of ASM Assembly Systems, a leading global supplier to the electronics business.

“With Cato, we can equip all employees with a very scalable remote solution and instead of connecting to a VPN server, they can just connect straight into the Cato Cloud and be able to source all our global applications.  We are also able to issue those licenses and manage the remote users from the same dashboard we use for our global offices. Having one console for everything makes the whole management process much simpler, and very much helped us stay on top of these unique circumstances.”

Geosyntec Consultants

“Our company is dispersed across the globe with over 80 office locations, many of them are on the Cato network. We utilize a few different VPN technologies. With the COVID-19 pandemic on the rise, many of our users began to work remotely. Our VPN traffic spiked, in some cases hitting the limits of our VPN servers,” says Edo Nakdimon, Senior IT Manager, at Geosyntec Consultants, an environmental engineering firm.

“Instead of purchasing more VPN server licenses, we equipped remote users with Cato access. In a matter of 30 minutes we configured the Cato mobile solution with single sign-on (SSO) based on our Azure AD. Cato provided us a scalable remote access solution that extends our QoS and network policies in our SD-WAN to our remote users and reduced the network overhead and bottlenecks for remote users as they connected directly to Cato, eliminating unnecessary hops across the public Internet core. The easily deployed SSO and web filtering integration provided us additional layers of security for our VPN users. The Cato mobile access solution is simple to deploy, yet robust. It improved our employees’ ability to securely and productively work remotely.”

Westmoreland Mining

“We found ourselves having to rapidly increase our capacity to support a larger than normal remote workforce and successfully rolled out 150+ Cato VPN clients within 24 hours. It was a huge success,” says Kent Wade, Director of IT and Cybersecurity at Westmoreland Mining LLC, a coal supplier.





Cloud Compute

A Modern VPN Alternative to Deploy Now

Work from anywhere has recently become a hot topic. The coronavirus outbreak has forced many organizations to move some or all of their employees to work from home. In some cases, work from home was a way to reduce possible exposure; in others, it was mandated by health authorities to prevent the spread of the disease across communities.

This unforeseen set of events caught many organizations off guard. Historically, only a subset of the workforce required remote access, including executives, field sales, field service, and other knowledge workers. Now, enterprises need to maintain business continuity by enabling the entire workforce to work remotely.

The most common enterprise remote access technology is Virtual Private Networking (VPN). How does it work? A VPN client is installed on the users’ devices – laptops, smartphones, tablets – to connect over the Internet to a server in the headquarters. Once connected to the server, users gain access to the corporate network and from there to the applications they need for their work.
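To make this model concrete, here is a minimal client profile for OpenVPN, one common implementation of this pattern (the hostname, port, and file names below are placeholders for illustration, not taken from any product mentioned here). Notice how every user funnels into a single server at headquarters:

```text
client
dev tun                        # virtual tunnel interface on the user's device
proto udp
remote vpn.example.com 1194    # the lone VPN server back at headquarters
auth-user-pass                 # prompt for corporate username/password
ca ca.crt                      # trust anchor for verifying the server
redirect-gateway def1          # send all traffic through the tunnel
```

Every directive points at one chokepoint, which is exactly where the scaling and availability problems discussed next originate.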

The obvious choice for enterprises to address the work-from-anywhere requirement was to extend their VPN technology to all users. However, VPNs were built to enable short duration connectivity for a small subset of the users. For example, a salesperson looking to update the CRM system at the end of the day on the road. VPNs may not be the right choice to support continuous remote access for all employees.

VPN is incompatible with company-wide work-from-anywhere requirements

VPN technology has many shortcomings. The most relevant ones for large-scale remote access deployments are scalability, availability, and performance.

VPN was never meant to scale to continuously connect an entire organization to critical applications. Under a broad work-from-anywhere scenario, VPN servers will come under extreme load that will impact response time and user productivity. To avert this problem, additional VPN servers or VPN concentrators would have to be deployed in different geographical regions.

Next, each component in the VPN architecture has to be configured for high availability. This increases cost and complexity. The project itself is non-trivial and may take a while to deploy, especially in affected regions.

Finally, VPN uses the unpredictable public Internet, which isn’t optimized for global access. This is in contrast to the premium connectivity, such as MPLS or SD-WAN, available in corporate offices.

SASE: A VPN alternative for continuous work from anywhere by everyone

In mid-2019, Gartner introduced a new cloud-native architectural framework to deliver secure global connectivity to all locations and users. It was named the Secure Access Service Edge (or SASE). Because SASE is built as the core network and security infrastructure of the business, and not just as a remote access solution, it offers unprecedented levels of scalability, availability, and performance to all enterprise resources.

What makes SASE an ideal VPN alternative? In short, SASE offers the scalable access, optimized connectivity, and integrated threat prevention needed to support continuous large-scale remote access.

First, the SASE service seamlessly scales to support any number of end users globally. There is no need to set up regional hubs or VPN concentrators. The SASE service is built on top of dozens of globally distributed Points of Presence (PoPs) to deliver a wide range of security and networking services, including remote access, close to all locations and users.

Second, availability is inherently designed into the SASE service. Each resource, whether a location, a user, or a cloud, establishes a tunnel to the nearest SASE PoP. Each PoP is built from multiple redundant compute nodes for local resiliency, and multiple regional PoPs dynamically back up one another. The SASE tunnel management system automatically seeks an available PoP to deliver continuous service, so the customer doesn’t have to worry about high-availability design and redundancy planning.
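The fallback behavior just described can be sketched in a few lines of Python. The PoP names, latency figures, and selection rule below are invented for illustration; they are not Cato’s actual topology or algorithm:

```python
def pick_pop(pops):
    """Return the lowest-latency PoP that is currently available."""
    candidates = [p for p in pops if p["available"]]
    if not candidates:
        raise RuntimeError("no SASE PoP reachable")
    return min(candidates, key=lambda p: p["latency_ms"])

pops = [
    {"name": "Frankfurt", "latency_ms": 12, "available": False},  # node failure
    {"name": "Amsterdam", "latency_ms": 19, "available": True},   # regional backup
    {"name": "Ashburn",   "latency_ms": 95, "available": True},
]

# The tunnel transparently lands on the next-best available PoP:
print(pick_pop(pops)["name"])  # Amsterdam
```

The point is that the failover decision lives inside the service, not in the customer’s high-availability design.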

Third, SASE PoPs are interconnected with a private backbone and closely peer with cloud providers to ensure optimal routing from each edge to each application. This is in contrast with the use of the public Internet to connect users to the corporate network.

Lastly, since all traffic passes through a full network security stack built into the SASE service, multi-factor authentication, full access control, and threat prevention are applied. Because the SASE service is globally distributed, SASE avoids the trombone effect associated with forcing traffic to specific security choke points on the network. All processing is done within the PoP closest to the users while enforcing all corporate network and security policies.

A SASE Service you can deploy TODAY

If you are looking to quickly deploy a work-from-anywhere solution in your business, consider a SASE service. Cato was designed from the ground up as a SASE service that is now used by hundreds of organizations to support thousands of locations, and tens of thousands of mobile users.

Cato is built to provide the scalability, availability, performance, and security you need for everyone at every location. Furthermore, Cato’s cloud-native and software-centric architecture enables you to connect your cloud and on-premises datacenters to Cato in a matter of minutes and offers self-service client provisioning for your employees on any device.





Cloud Compute
With Desktop as a Service (DaaS), your local data moves to the cloud. Instead of working with critical data and applications locally, your employees use their devices to access a remote virtualized cloud desktop running in a data center.

Learn More About DaaS

Contact ATI to set up a no-obligation DaaS consultation

Cloud Compute
Article by ATI partner US Signal 

There’s plenty of information out there about the benefits of cloud-based disaster recovery (DR) and backup. You’ve also likely read a lot about how to overcome the challenges associated with cloud-based DR and backup. There are even numerous checklists for finding a cloud-based DR or backup provider. But what you really want to know is: how do I get started?

As is the case with a lot of questions regarding cloud services, the answer is: it depends. All companies are different. The nature of their businesses varies. Their operations are unique, and their business requirements and needs are usually specific to their industry, market sector, stakeholders and other variables.

Cliché as it sounds, there really is no “one-size-fits-all” approach to cloud-based DR and backup. However, there are some basic guidelines to help you move your organization to a cloud-based DR and/or backup model. Among them:

  1. Inventory your data and applications. What do you have? Where is it? Who needs it and how often? (You can’t do Step #3 without this information.)  
  2. Identify your mission-critical infrastructure. There is always mission-critical equipment required to keep core business operations up and running.
  3. Determine the effects on your organization if you couldn’t access the various types of data and applications you have, as well as your IT infrastructure. This will help you determine if some are more important than others.
  4. Develop recovery point objectives (RPOs) and recovery time objectives (RTOs). Check to see if there are any regulatory requirements, government mandates or industry standards you must comply with in terms of your RPOs and RTOs.
  5. Create a recovery event task list. What do you need first, second and so on, and who’s responsible for getting these tasks done?
  6. Document how you currently handle DR and backup. Are you employing industry best practices? Are you accounting for all your data, applications and IT infrastructure? Are these tactics meeting your RPO and RTO requirements? Have you tested these tactics to make sure they work the way you think they should work? Are you confident that if a manmade or natural disaster struck, your company could continue doing business or at least mitigate issues enough so you could be back online quickly without disrupting your business operations?
  7. If there are deficiencies in what you’re currently doing, or you don’t have any kind of DR or backup plan in place, determine if you have the in-house expertise and available resources to get a cloud-based solution in place. If you do, get on it. If not, seek out a service provider that can help.
  8. Whether you’re going the “do-it-yourself” route or working with a service provider, first determine what you need in a cloud-based DR and backup solution. List out your “must-have’s” and “nice-to-have’s.” Some of the things to consider when creating your list:
    • Do you have both mission-critical and critical data and applications that might require different levels of protection and backup such that you’d benefit from a ‘tiered approach’?
    • How will your data be securely transferred and stored in the cloud?
    • Will data be encrypted in transit and at rest, and who will hold the data encryption keys?
    • How will users be authenticated? Is multi-factor authentication included?
    • Will the solution meet compliance mandates?
    • How much bandwidth, compute and storage will be needed?
    • How quickly will data need to be transferred to the cloud?
    • Will the service be managed by a provider?
    • Look back at #4. What are your compliance, RTO and RPO requirements?
    • Will you need help with data migration and/or solution testing?
  9. Carefully assess the advantages and disadvantages of the various cloud-based DR and backup options under consideration. Do any of them fully meet your needs and requirements? Can they be customized for a better “fit”? Are there any tradeoffs that may overshadow the benefits?
  10. If you’re going with a service provider, will that company back its DR and backup solutions with a service level agreement? Does it have around-the-clock tech support available if you need it? Does its solution protect you against ransomware and other security threats, as well as ensure your data can be successfully backed up and recovered?
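The arithmetic behind step 4 is worth making explicit: your backup interval bounds worst-case data loss (RPO) and your restore time bounds worst-case downtime (RTO). A quick sketch, with made-up numbers purely for illustration:

```python
def meets_objectives(backup_interval_h, restore_time_h, rpo_h, rto_h):
    """Worst-case data loss is one backup interval; worst-case downtime
    is the time a full restore takes. Both must fit inside the objectives."""
    return backup_interval_h <= rpo_h and restore_time_h <= rto_h

# Nightly backups with a 6-hour restore, against a 4-hour RPO and 8-hour RTO:
print(meets_objectives(24, 6, rpo_h=4, rto_h=8))  # False: backups too infrequent
print(meets_objectives(1, 6, rpo_h=4, rto_h=8))   # True: hourly backups qualify
```

If the check fails, either shorten the backup interval or renegotiate the objectives before moving on to step 5.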

The Case for Managed DR and Backup

One of the easiest ways to move DR and backup to the cloud is to work with a trusted service provider. Working with the right service provider can:

  • Free up your internal resources
  • Reduce capital expenses
  • Help you meet many of your compliance requirements (provided the provider offers a compliant DR and backup solution)
  • Let you take advantage of leading-edge data protection and best practices (because service providers have to invest in the best to keep their customers happy)
  • And more!




Sign up for a no-obligation Disaster Recovery assessment.


Cloud Compute

Article by ATI Partner Josh Williams, VP of Solution Engineering – INAP 

 

Yankees or Red Sox, Linux or Windows, Star Wars or Star Trek: There’s no shortage of choices life asks us to make. When it comes to cloud versus colocation, it may be tempting to see it as just another either-or decision. But the question you should be asking isn’t “colo or cloud”—it’s “what’s the right mix for my applications?”

 

Colo is sometimes forgotten because of its more popular, younger and shinier cousin the cloud, but there are use cases for both, and your particular mix will depend on your applications. For example, a financial services company that wants to leverage cloud to gain cost efficiency might use a public cloud for its end-of-day or end-of-month batch processing, while also using colocation or hosted private cloud for its mission-critical databases and supporting applications. This configuration would provide the cost efficiency of public cloud for short-term workloads while also utilizing a dedicated, secure platform optimized for applications that are always on.

 

Regardless of your situation, developing a comprehensive cloud strategy will help you avoid lock-in, providing flexibility, adaptability and room to grow as your needs evolve. And that multi-cloud strategy just might include some smart usage of colocation if, for example, you have a need for specific hardware or want a network presence in certain locations. Here’s a primer for understanding the big pieces of cloud, colo and anything in between.

 

The Hidden Cost of On-Premise Solutions

 

For any organization facing the decision to “build” or “buy” their infrastructure, “buying”—whether bringing your own hardware and renting space in a colocation facility or shifting entirely to the cloud—is a simple step that is guaranteed to level up your IT. Yet the conversation about colo and cloud is usually focused on dollars spent and saved. This is understandable, especially since on-premise data centers are often expensive to secure and maintain, and going off-premise can have a clear impact on cost savings. But what could the conversation be if CAPEX or OPEX weren’t the primary drivers of your IT infrastructure decisions?

 

Now don’t get me wrong—I know keeping costs reasonable is important—but I also think it might be helpful to think about your choice in terms of a different resource: time. The math is simple: If you can offload certain tasks to a service provider, that’s time you get back. Every minute not spent handling maintenance and administration is a minute you now have free to focus on your actual applications. With that being said, here are the ways colo and cloud can make your life better.

 

Security and Compliance

 

With a colo or cloud service provider, all the work of physical data center security and maintenance is no longer part of your to-do list—and a lot of compliance too, depending on your provider. With a managed service provider, they can take care of your routine data security and compliance tasks or even help you architect your infrastructure to fit the specific compliance needs of your applications.

 

Connectivity

 

A big part of the decision to move off-premise may be a simple need for connectivity. Your on-premise solution might lack certain connectivity altogether or you may have trouble with reliability or latency. Colocation can solve these issues, whether you need to connect to certain geographies, carriers or third-party clouds like AWS or Azure. Managed services from your provider can give you an edge here too, ensuring dependable connectivity and minimizing latency even in spread-out networks.

 

Backup and Disaster Recovery

 

A huge upside to partnering with a comprehensive service provider is that regardless of your infrastructure solution, backup and DR services can be easily implemented. Whether using a colocation facility or a hosted private cloud, both are effective, efficient ways to build redundancy into your systems—without having to build and operate your own second site.

 

The Biggest Difference-Maker: A Trusted Service Provider

 

When choosing the right mix, it’s a good idea to start by asking a few questions:

 

  • Where do you see your IT infrastructure and operations strategy in three to five years?
  • What do you predict your service needs will be then?
  • And most importantly: Are you working with a provider that gives you the capability to do the things you need to do today and won’t hinder you from doing what you need to do in the future?

Choosing the right provider can determine whether you have the flexibility and freedom to meet your future needs. They can be an invaluable partner in helping you to rightsize for today without limiting your options for the future. So pick one with a wide range of infrastructure solutions and managed services, one that is skilled, knowledgeable and experienced in multiple competencies, whether colo or cloud. At INAP, my team of solutions engineers helps customers navigate the process, identify hard-to-spot downsides and share knowledge based on our experience assisting other customers.

 

Applications that are not a good fit for a legacy infrastructure model can be easily migrated with the help of a service provider like INAP, while maintaining a single partner that knows you and your business. The right solution will depend on your applications, and that will inevitably evolve over time. Rather than pitting colo against cloud, start from what your applications require, then find the right mix that makes sense for you.





No-obligation DRaaS, Backup & Cloud Consultation – Sign Up Here


Cloud Compute

What’s your disaster recovery or data backup plan? ATI can help determine:

 

  • Necessary RTO and RPO times
  • Zerto? Veeam? Nimble? Double-Take?
  • Colo or georedundant data centers?
  • Best practices for data backup
Schedule a DRaaS Analysis today.


ATI works closely with the leader in cloud solutions to provide a cost-effective, dependable set of products to protect the modern business. Our cloud-based Disaster Recovery services provide customers with data loss prevention and various business continuity options, leveraging best-of-breed IT infrastructure as a service (IaaS).



Contact us today to learn more, or click here to learn more about Disaster Recovery as a Service.

Cloud Compute
AWS vs. Azure: Is There a Difference?

Article by ATI partner Danielle Hagel, Senior Manager of CoreSite’s Customer Engagement Program

Are there differences in cloud services, or is IaaS evolving into a commoditized technology? If you are considering making changes to your cloud strategy or initiating a private, public or hybrid cloud deployment, that’s an important question.

 

Naturally, you will look to the leading providers for an answer. For the past six years, Gartner has placed two companies in the “leaders” box of their Magic Quadrant for Cloud Infrastructure as a Service, Worldwide: Amazon Web Services and Microsoft [1]. Let’s put the scale of these two companies in perspective. In 2015, Gartner reported that AWS customers deployed 10X more infrastructure than the combined adoption of the next 14 providers [2]. Today, although Microsoft lags behind Amazon in overall use, “Azure adoption has increased significantly from 26 percent to 43 percent, reducing the AWS lead among enterprises,” according to RightScale’s 2017 State of the Cloud Report [3]. Looking ahead, you can expect both companies to make sharing workloads between on-premises and off-premises clouds more attractive.

 

AWS Services, Azure and Enterprise

 

If you look back only a few years, you will find that “bring IT to the forefront of the enterprise” was a common rallying cry. Now, IT is not only at the forefront; every significant decision for the enterprise includes the CTO’s input.

 

AWS and Azure understand this shift, and consequently the array of services they sell satisfies the business needs of their customers. A detailed comparison of AWS and Azure would reveal some differences in each provider’s service offerings. Some are subtle, such as SLA uptime guarantees or published certifications. The salient point is that the services offered address core enterprise requirements; they are business-driven, and will continue to change according to what enterprises need, not according to the cool new things that can be done with more computing power and faster networks.

 

Keep in mind that other players could offer services that suit your company or industry. Google Cloud Platform and SoftLayer are two examples, and each differentiates itself with niche IaaS services: Google’s services include machine learning and natural language APIs; SoftLayer offers a 100% SLA.

 

Again, we are not recommending either of these companies, but are illustrating the fact that the cloud is a dynamic technology with many possibilities. Come to think of it, that doesn’t sound like commoditization at all.

 


  1. aws.amazon.com/resources/gartner-2016-mq-learn-more
  2. www.gartner.com/doc/reprints?id=1-2G2O5FC&ct=150519&st=sb
  3. www.rightscale.com/lp/state-of-the-cloud

Cloud Compute, From ATI
Considering the cloud? We are your “as-a-Service” technology experts. Schedule a Cloud Consultation today.



There are hundreds of cloud solutions… So how do you choose the right one? We’ve met, vetted, toured data centers, and learned the strengths and weaknesses of all the major providers… So you don’t have to. Cloud Migration? Public or Private? Security & Compliance? Schedule a Cloud Consultation with ATI and we’ll walk you through each.