April 22, 2013
By Christopher Hertz
Thursday, October 25, 2001 seems like just yesterday. In the District of Columbia it was unseasonably warm, with a high of 78 degrees Fahrenheit, and in Redmond, Microsoft was releasing Windows XP. Fast forward eleven and a half years, and we are less than a year from Tuesday, April 8, 2014. On that day, companies and individuals still using Windows XP will be running “unsupported software.” The Windows XP lifecycle will have come to an end, and security patches will stop arriving. This doesn’t mean that the operating system will stop running in your computing environment, but continuing to use Windows XP will expose you to greater security risks.
New Signature has been working with customers for several years to move them from Windows XP to Windows 7 (and now, in some cases, to Windows 8). For larger organizations, this process, which includes planning, asset inventory, application analysis, application testing, application remediation, data migration, new operating system deployment, and training, can take between six and 18 months. We recommend that any company still running Windows XP begin planning a migration to a modern desktop; research and consulting firm Gartner Inc. has advised its clients to move off of Windows XP this year.
For some firms, Windows XP “end-of-life” may have a very real impact on their business. For example, firms that must comply with the PCI Data Security Standard (PCI DSS) and are still running Windows XP on April 9, 2014 will no longer be PCI DSS compliant. This is because doing so runs afoul of PCI DSS Requirement 6.1, which states that you must, “Ensure that all system components and software are protected from known vulnerabilities by having the latest vendor-supplied security patches installed. Install critical security patches within one month of release.” In addition, and aside from PCI DSS compliance issues, all companies running Windows XP put themselves squarely in every hacker’s bulls-eye. Those with malicious intent scan company networks for weaknesses, and an end-of-life operating system like Windows XP may be an open invitation that results in a costly data breach, particularly if new vulnerabilities are discovered after April 8, 2014.
The good news is that for a small or medium-sized organization ready to migrate to Windows 8 and Office 2013 (Office 2003 is also nearing end-of-life), Microsoft has rolled out its Get2Modern program. Through the Get2Modern campaign you can upgrade your existing Windows XP Professional machines to Windows 8 Pro and Office Standard 2013 at a 15% discount through June 30. Each customer can purchase up to 100 licenses at the promotional price: 100 licenses of Windows 8 Pro and 100 licenses of Office Standard 2013. The Get2Modern deal is offered through Microsoft partners, including New Signature, and is limited to organizations with no more than 249 seats. If you are interested, you will need to purchase Windows 8 Pro and Office Standard 2013 together under Microsoft’s Open license program to get the discount.
April 19, 2013
By Jessie Collins
Many years ago, I studied economics. Indulge me for a moment so I may make a point.
Let’s imagine your web site has a bunch of pages (we’re going to make this really scientific) and that you spend an equal amount of time drafting, editing and posting each page. Your marginal cost – the effort you put into each page – could be represented like this:
Now let’s look at your imaginary web stats over the course of, say, 1 year. You will likely find that your traffic is not evenly distributed over those pages. In fact, I’d venture to guess that at least 80% of your traffic, if not more, is devoted to the top 20% of pages. That means the total benefit derived from those pages – if we equate benefit to page views – could be represented like this:
Notice how the curve flattens out at the top? That’s because most pages are viewed only a handful of times, adding very little value to your bottom line. If we look at the amount of value each individual page contributes, you’d be looking at something like this:
The first few pages add lots of value. But the marginal value of subsequent pages decreases. There comes a point where the added value of these pages is less than what it cost you to put it out there in the first place. So, as someone who is charged with maintaining a site and maximizing the benefits received from it, you need to do some analysis and be brutally honest about what is worth keeping, and what needs to go. Continuing to maintain pages that just aren’t delivering value is a waste of your limited resources.
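The analysis above can be sketched in a few lines of code. This is a toy model with hypothetical page names, view counts, and cost/value figures, not real analytics data:

```python
# A minimal sketch of the marginal-value analysis described above.
# All figures below are hypothetical placeholders.

PAGE_VIEWS = {
    "/home": 12000, "/products": 9500, "/pricing": 4200,
    "/blog/announcement": 300, "/blog/old-news": 45, "/archive/2009": 12,
}
COST_PER_PAGE = 50      # effort units spent drafting, editing, and maintaining a page
VALUE_PER_VIEW = 0.25   # benefit units you assign to each page view

def pages_to_cut(views, cost, value_per_view):
    """Return pages whose marginal benefit no longer covers their cost."""
    return sorted(p for p, v in views.items() if v * value_per_view < cost)

print(pages_to_cut(PAGE_VIEWS, COST_PER_PAGE, VALUE_PER_VIEW))
```

Swapping in your own analytics export and an honest per-page cost estimate turns this from a toy into a first-pass cut list.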
Think of your web site as a portfolio. You want to invest in strong performers and drop the duds. By trimming down your page count you can streamline the site’s architecture, making sure users can find what’s most important to them. You can also focus your limited content resources on the most exposed areas of the site. Reducing your overall footprint reduces the overall amount of maintenance your site requires.
There are Always Exceptions
If the graphs didn’t scare you away, you might be one of those people who would say, what about the long tail? It’s true, taken as a whole, the total value of all the pages sitting in the long tail of your marginal benefit graph may be the same as (or greater than) the value produced by your few superstar pages. But the cost for maintaining it all is very high. On the other hand, it would be a shame to discount the entire long tail in favor of only the “popular” content. I concede that you may have a few gems in that tail that are worth saving. Some less-popular content items may have more value than the number of views alone could imply. For example:
- Unique content that isn’t available anywhere else (Are you the only doctor studying that disorder? Are you the only one selling that product? )
- Archives and record systems whose sole purpose is to provide a catalog of content (Are you responsible for a system of record?)
- Fascinating and relevant copy that is obscured by poor architecture or a bad user experience (Should that navigation really go 6 levels deep? Does it have to be presented using Flash?)
Still, these are likely the exception rather than the rule. Let’s be honest. Your long tail is most likely filled with junk.
What to Do?
Deleting large tracts of a website can be a political mine field. Finding the needles in the haystack worth keeping won’t be easy either. But, by arming yourself with the proper data and support, you can succeed.
- Conduct a content inventory so you know exactly what is on the site.
- Audit the inventory using your site analytics to determine what pages are actually visited. While you’re at it, take a look at how long ago the page was created and when it was last modified. If you’re lucky, your CMS will be able to give you this data in a snap and it’s illuminating. I suspect you’ll find that a lot of content was created a long time ago and hasn’t been reviewed or updated in years. That’s very likely the sort of stuff that needs to go.
- Sort, color code or otherwise annotate your inventory so you can pull out those super performers, identify the ROT and then look at your long tail content for hidden treasures.
- Share this information with stakeholders in the most visual way possible – colorful charts and simple ROI tables tend to work very well. It’s not that they aren’t bright – you just need to get their attention and speak their language. It can also help to compare the content that belongs one stakeholder with the entire universe of content on the site. Give your stakeholders something to compare themselves to and a way to understand the figures you are sharing.
- Have an open and honest discussion with stakeholders about their content and what needs to stay, go or be rearranged.
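The audit steps above can be sketched as a small script. The inventory rows, thresholds, and the working definition of stale content here are hypothetical assumptions, not a prescription:

```python
from datetime import date

# Hypothetical content inventory rows: (url, views over the last year, last modified).
INVENTORY = [
    ("/home", 12000, date(2013, 3, 1)),
    ("/services", 4000, date(2012, 11, 15)),
    ("/press/2008-launch", 20, date(2008, 6, 2)),
    ("/old-promo", 5, date(2009, 1, 10)),
]

def flag_rot(inventory, today, min_views=50, max_age_days=730):
    """Flag pages that are both rarely visited and stale (not updated in ~2 years)."""
    return [url for url, views, modified in inventory
            if views < min_views and (today - modified).days > max_age_days]

print(flag_rot(INVENTORY, date(2013, 4, 19)))
```

Anything this flags still deserves a human look for long-tail gems before deletion.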
If this sounds too daunting, or you could use some additional support, New Signature is happy to help.
By Ralph Kyttle
Because System Center Operations Manager (SCOM) provides so many different monitors and rules, it would be very difficult to keep track of every alert you might wish to be notified about and enable notification for these rules and monitors one by one. SCOM may also be monitoring key parts of your infrastructure with monitors you are not even aware of, and without knowledge of those monitors, you would have no way of configuring notifications for the alerts they generate.
In this example, we are showcasing a solution for a data center that is managed by a central IT department, and that IT department must receive notifications for all servers and services in the environment. We set up our criteria to notify on all alerts that come into the SCOM console and meet the following attributes:
- Severity: Warning or Critical
- Priority: Medium or High
- Resolution State: New
This ensures that we do not need to have knowledge of every single monitor or rule that SCOM is capable of alerting on within our environment, and allows us to receive notifications on all important events as detected by SCOM.
The thing to keep in mind with this approach to notification is that some tweaking will be required to get you to the end goal that you desire. While we are telling SCOM to notify us only on new events that are deemed of a severity of either warning or critical with a priority of either medium or high, it is very possible that we will get notified on some alerts that do not have a direct action for us to take, or that we would just rather not get notified on. We can resolve these issues by applying overrides to the monitors and rules within SCOM.
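The subscription criteria above can be modeled as a simple filter. The alert records below are illustrative stand-ins, not actual SCOM objects or API calls:

```python
# A sketch of the notification criteria: Warning/Critical severity,
# Medium/High priority, and a New resolution state.

def should_notify(alert):
    """Return True if an alert meets all three subscription criteria."""
    return (alert["severity"] in {"Warning", "Critical"}
            and alert["priority"] in {"Medium", "High"}
            and alert["resolution_state"] == "New")

alerts = [
    {"name": "Disk space low", "severity": "Critical", "priority": "High", "resolution_state": "New"},
    {"name": "Service heartbeat", "severity": "Information", "priority": "Low", "resolution_state": "New"},
    {"name": "Closed incident", "severity": "Warning", "priority": "Medium", "resolution_state": "Closed"},
]
print([a["name"] for a in alerts if should_notify(a)])
```

An override that demotes a monitor's severity or priority simply changes the fields this filter sees, which is why overrides are enough to silence unwanted notifications.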
Overrides are the way we change the default behavior of an object, such as a rule or monitor within SCOM. Rules and monitors are configured with some default settings, which are defined by the management pack author. We have the ability to override many settings, for example we can adjust the thresholds for monitors. If disk space monitoring was alerting us when a drive had less than 10% free space left on it, and we wanted to be notified sooner, say when the disk had less than 20% free space, we could set an override to change the default behavior that was defined in the management pack by the management pack author.
We can also use overrides to change the severity and priority levels of a monitor or rule. Using the logic we have defined above, we can use overrides to essentially turn off notification for certain alerts by adjusting the severity and priority levels.
We should think about SCOM notifications in two ways:
- Is the alert actionable?
- Is this alert something that I would like to be able to view or track via the SCOM console?
Is the alert actionable?
All alerts that are sent via notifications should be actionable by an administrator. One of the key issues that can come up with any alerting system is when too many notifications are sent that don’t contain actionable information. This can lead to an issue where administrators are not responding to alerts, because they have become accustomed to the fact that many of the alerts are not providing actionable information. An administrator or team that has developed this mindset can end up missing important alerts. If you receive a notification from SCOM and realize that there is no action required on your part, or if after performing research on the alert you feel that in your environment you can safely ignore the alert, it may be best to set an override to prevent notification for this alert from occurring again.
Based on the logic we defined earlier, this can be accomplished in multiple ways, depending on your desired outcome. Because we are notifying on alerts that have a Warning or Critical severity, we can set an override on a rule or monitor to change its severity to Informational. This will prevent additional notifications for the alert from being sent out, and will move the alert down to the informational alerts section of the alerts view.
We can also approach this by setting an override on the rule or monitor’s priority. Because we are notifying on alerts that have a Medium or High priority, we can set an override on a rule or monitor to change its priority to Low. This will prevent additional notifications for the alert from being sent out, but will keep its severity to its original setting. This could be useful if you wanted to prevent notifications for an alert from being sent out, but still wanted to have the alert show up in the SCOM console with its default severity level of either Warning or Critical.
These decisions are all based on your desired outcome. Overrides can be placed on a single object, a group of objects, or the entire class of objects. So you can choose to change settings for a single machine, a group of machines or all machines. This flexibility is important, because say there was an alert that was generated, and you wanted to disable notifications but only for one server. All other servers should continue to send notifications for that alert. Using overrides, we can target desired objects for modification to obtain our desired behavior.
You can also customize notifications for their recipients, for example, to enable certain alerts to send text message and email, while others just send email. In addition, you can set certain notification strategies based on the time of day, and even delay notifications from being sent out for a configured period of time to prevent excessive notifications from being sent out for issues that seem to resolve themselves in a short time frame, such as an alert on CPU utilization, which may be a temporary condition that resolves itself automatically upon the completion of a process on a computer.
As we noted before, you can set overrides on an alert or monitor to adjust its severity or priority. While this is useful for disabling or changing notifications to create your desired notification state, it can also be used to increase the amount of notification you receive for a particular alert. For example, say you were sending text message and email notification on Critical alerts, and only email notification for Warning alerts. Using overrides, you could also apply settings to move a monitor or rule from Warning to Critical or vice versa, depending on your desired notification behavior. This could allow you to reach a point in your environment where not only are you filtering alerts to decide which alerts send out notifications, but also the type of notification that is sent out. Say you went with this setup, and you received a text message for an alert. While it is important for the alert to be actioned, you decide that text message notification is too intense for this particular alert. If you configure an override on the rule or monitor and drop the severity to Warning, you will still receive an email notification when the alert is triggered, but will no longer receive a text message. The same can be said in the reverse, to set an override on a rule or monitor that by default is set to Warning, and increase the severity to Critical to ensure that you receive increased notification for that alert.
The previous example could be very useful for a team that performs after-hours support of their environment, which is most teams these days. In some environments, setting a cell phone to ring on every new email could keep the on call administrator up all night. However, with the ability to deeply customize alert settings via overrides, we can ensure that only the most critical alerts are sent via text message, and the on call administrator can set their phone to ring for each text message as opposed to each email, allowing them to be responsive to important issues.
Is this alert something that I would like to be able to view or track via the SCOM console?
In the section above, we discussed how to change our notification settings via overrides. Some of these tweaks changed how the alert would be presented to an administrator in the SCOM console. These options are useful if you would like to change notification behavior but still have the ability to track the occurrences of an alert. What if you don’t care about the alert at all? There could be a situation where SCOM generates an alert, and after some research you find that you would not like to receive any notification for the alert and would not even care for it to show up in the console.
In this case, we would look to disable that rule or monitor. Disabling the rule or monitor will both prevent notifications and prevent it from showing up with alerts in the SCOM console. As we mentioned before with overrides, disable rules are similar in that they can disable a monitor or rule for a single object, a group of objects, or all of the objects of the particular class that is affected by the rule or monitor.
In conclusion, there are multiple ways of handling your alert and notification strategy with SCOM. The first thing to ask is, “Is this alert actionable?” If it is, ensure that the alert sends a notification. If it is not actionable, configure overrides to change the default behavior of the alert: disable notification, adjust notification options, or disable the alert altogether if you do not need notification or tracking for it. Through this continued tweaking of your environment, you will see the number of unwanted notifications and alerts decrease, and you will mold SCOM to your environment so that it provides actionable alerts on important events, based on the management packs you have installed.
April 18, 2013
By Ralph Kyttle
System Center Configuration Manager 2012 SP1 (SCCM) introduces many important and valuable updates, one of which is the ability to manage Apple OS X computers. New to SP1 is the ability to deploy an SCCM agent onto Apple OS X computers with an Intel 64-bit chipset running Mac OS X 10.6, 10.7, or 10.8.
While not all features of SCCM are available yet for OS X devices, Microsoft has enabled some key features to get the ball rolling. At this time, these features include:
- Computer discovery
- Hardware inventory
- Software inventory
- Application deployment
- Configuration deployment and compliance
This feature set is a good starting point for managing OS X computers, but it is worth noting that there are some things to be aware of and plan for if you are looking to introduce OS X computers into your SCCM management infrastructure.
The first item to note is that client installation and management for Apple OS X computers in System Center Configuration Manager 2012 SP1 requires public key infrastructure (PKI) certificates. These certificates must be issued by a Microsoft Certificate Authority, so if you do not currently have a PKI solution deployed within your environment, this would be your first step towards enabling Mac management within SCCM.
Secondly, at this time there is no push install mechanism available for the Apple OS X client, so all OS X computers managed by SCCM will require a manual install of the SCCM agent built for OS X. Because the OS X SCCM agent relies on a user certificate for authentication to a management point or distribution point, the end user will have to be present during the client install to supply domain credentials when prompted during the enrollment task. For more information on how to install clients on OS X computers in Configuration Manager, see the following TechNet article: How to Install Clients on Mac Computers in Configuration Manager.
In addition to the considerations that must be made to support the client rollout, it is important to be aware of a few differences between managing Windows and OS X devices. In the current configuration, OS X computers cannot take advantage of software deployments advertised as available; only required deployments are supported. So while you can provide a self-service portal where your Windows users can install software as needed, the same feature is not yet available for OS X. Also, while endpoint protection is now available for Mac computers, it does not integrate with the SCCM console at this time. So for now, you can use SCCM to deploy endpoint protection to an OS X client, but there is no integration built into the SCCM console to manage antivirus policies on OS X computers.
While these differences exist, it is critical to note that Microsoft has placed importance on providing SCCM 2012 as a single pane of glass to manage configurations across the various devices that exist within an IT environment. I am excited to see the components that are currently enabled, and I expect to see improvements and additional features for OS X management as time goes on. If you would like to learn more about System Center Configuration Manager 2012 SP1 or how to set up Apple OS X computer management in your environment, contact New Signature to speak with one of our System Center experts, who will be happy to provide further assistance!
April 17, 2013
WordPress Security: How to protect your site from recent brute force attacks and why your password shouldn’t be “password12345”
By Zach Azar
As Frederic Lardinois made very clear in his recent blog post on TechCrunch, personal and commercial WordPress sites are under attack. The attacker’s strategy is simple: Keep guessing passwords until one opens the door. This is called a brute force strategy. You simply keep trying until you get it right. Once a password is guessed correctly, the attacker has full access to the backend of a site including the site’s architecture and the content contained in the site.
With a large network of computers at the attacker’s disposal, they can guess thousands of passwords from thousands of different IP addresses. This attack is not extremely technical. It is not “cracking the code” or performing incredibly elaborate hacks on the system. They just guess your username and password.
One reason that WordPress is being targeted is because the attacker knows half of your credentials already. When a new WordPress site is created, the first user has the username “admin.” Assuming you don’t change that name, the attacker already knows one piece of data (your username). Now they just need to guess the password.
This attack isn’t raising awareness that WordPress is faulty or that hackers are code geniuses; it’s reminding us that a major component of our defense against attackers trying to gain access to our websites consists of two strings of characters: your username and password. Thus, we need to take this defense seriously.
Using a strong password is crucial and simple. There are many forms of passwords that are considered strong. New Signature recommends using passphrases instead of passwords. What is the difference, you ask? Put simply, a passphrase is a sequence of words that forms a sentence with correct grammar, capitalization and punctuation. Passphrases deliver a number of benefits: (1) most sentences are naturally quite long, providing outstanding security against brute force attacks; (2) they are simple to create, reducing the burden when you have to change your password on a regular basis; and (3) they are very easy to remember, making it less likely that they will be written down and then compromised (or forgotten). For example, a passphrase could be: In 2013, I chose a new password for my website! This simple-to-type, easy-to-remember passphrase is 47 characters long and super hard to crack! That’s a lot better than the random jumble of letters and numbers that many people resort to for long passwords.
If you want to, you can always use difficult passwords made out of random characters and digits, but remember to make these passwords at least 16 characters in length. There are sites which will create and customize these passwords for you securely. Passwords of this type are harder to remember though, so you may want to use tools like LastPass which will store and encrypt your passwords.
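To see why length matters so much, here is a rough back-of-the-envelope comparison of worst-case brute-force search spaces. It assumes about 95 printable ASCII characters per position; real attacks use dictionaries and distributed guessing, so treat these numbers as upper bounds, not guarantees:

```python
import math

# Worst-case exhaustive-search sizes for passwords of different lengths,
# assuming ~95 printable ASCII characters per position.

def worst_case_guesses(alphabet_size, length):
    """Number of guesses an exhaustive search needs in the worst case."""
    return alphabet_size ** length

examples = [
    ("8-char random password", 8),
    ("16-char random password", 16),
    ("47-char passphrase", 47),
]
for label, length in examples:
    n = worst_case_guesses(95, length)
    print(f"{label}: ~10^{int(math.log10(n))} guesses")
```

Each doubling of length squares the search space, which is why a memorable 47-character sentence can out-defend a forgettable 8-character jumble.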
With either a passphrase or a password, make sure not to use words or phrases that have a connection to your site (i.e. if your site is about horses, don’t use the word “horse” in your pass phrase). Also, don’t use the same password for multiple websites. If your password is discovered, you don’t want the attacker to now have access to your online banking, email, calendars, etc.
Another major deterrent for these types of attacks is removing the “admin” user. You can simply create another user, make them an “Administrator”, and give them a difficult-to-guess username. Don’t forget to give them a nickname and select to display their name publicly as their nickname. You can now delete the original “admin” user (making sure to attribute all posts created by the admin to your new user or to another user) and voila! All attempts at guessing the “admin” password are useless because the “admin” user doesn’t even exist! Plus if the attacker does come across your new administrative user’s name, you will have created a strong password which will provide a strong defense against guessing attackers.
Congratulations! After following the steps above, your site is significantly stronger against the recent attackers and it only took a few minutes. Remember to update your site as soon as updates are available for WordPress, your plugins, and your theme. Of course if you had difficulty with the steps above, please feel free to contact New Signature for help.
These precautions are only the beginning to keeping your WordPress site secure. Talk with New Signature today if you would like to learn more about:
- Enabling custom two-step login authentication
- Securing administrative users and administrative access
- Utilizing and custom configuring powerful WordPress security plugins
- Securing direct communication channels to the server
- Examining and correcting file permissions for publicly available files
- Scanning entire sites for viruses and malicious code
April 16, 2013
By Reed M. Wiedower
Now that Azure IaaS has reached general availability, customers looking to move to online services such as Office 365, Dynamics CRM Online or Windows Intune keep asking the same question: which identity service should I use? If you followed our earlier explanation of Azure AD, or this week’s announcement that Azure AD had reached general availability, you might be tempted to conclude that Azure AD is the correct identity solution in all cases. For many customers, though, such a decision would sacrifice the significant investments in on-premises AD that have made configuration and management much easier. And then there’s Active Directory Federation Services (ADFS), which can bring the power of federation to existing AD environments. Finally, with Azure IaaS, customers can now spin up virtual machines that are domain controllers, extending on-premises AD into the cloud natively. With so many different choices, for many customers the question remains: which do I choose?
Definition-wise, we should begin by noting the following:
- Regular AD on-premises involves domain controllers (DCs) running inside a corporate network spanning one or many sites
- Azure VMs can run domain controllers, and if connected back via Azure Virtual Networks, can serve as extensions of your existing AD on-premises
- Azure AD, by contrast, cannot “link” to your AD infrastructure except through Active Directory Federation Services (ADFS)
- ADFS can run on virtual machines built within Azure IaaS, meaning that you can combine both DCs and ADFS into instances that are connected to your network
- Very complex organizations may want to implement Forefront Identity Manager (FIM) to help synchronize different line-of-business and system-of-record systems within your organization.
Whew, that’s a mouthful! Fortunately, New Signature has helped organizations of all sizes select the proper identity management solution. We’ve broken down our recommendations into a simple pair of matrices: the first walks through the most common best practices, while the second walks through a feature-by-feature comparison to show which solution is the best.
Identity by Organization Type:
| Size of Organization | Notes | Recommendation |
| --- | --- | --- |
| < 25 | A new organization with no existing infrastructure | Skip on-premises AD and go straight to Azure AD; use Windows Intune for endpoint management |
| 25-100 | An existing organization with minimal infrastructure (2-3 DCs) | Use Azure AD for Online Services, and the new password sync components from Microsoft |
| 100-2000 | An existing organization with a single domain but multiple physical sites | Use on-premises AD coupled with ADFS running within an Azure VM for maximum uptime |
| 2000+ | An existing organization with multiple domains and extensive AD infrastructure | Use on-premises AD coupled with Forefront Identity Manager and ADFS spread across multiple sites |
Features by Identity Services:
| Feature | On-Premises Active Directory | Azure Active Directory | Azure VM running DC role | ADFS (either on-premises or in Azure VMs) | Notes |
| --- | --- | --- | --- | --- | --- |
| Single Sign On to Websites | Possible if using ADFS as well | Built-in | Possible if using ADFS as well | Built-in | SSO is a breeze with ADFS: we recommend running ADFS on Azure VMs to reduce site dependencies if one is not using Azure AD |
| Group Policy | Built-in | Not possible, yet | Built-in | N/A | If you want to use group policy, you’ll need to use regular AD or Azure VMs running a DC role. Alternatively, use an endpoint product such as Windows Intune to distribute policies. |
| High Availability | Possible if you add two DCs | Built-in | Possible if you add two DCs | Possible if you add multiple roles, to multiple sites | Other than ADFS, the other services are easy to add high availability. ADFS takes more of a lift, especially to span sites. |
| Multiple Domains or Forests | Built-in | N/A | Built-in | Supported | Organizations with multiple domains or forests may need FIM for ease of management |
| Support for Office 365, Dynamics CRM Online, Windows Intune | Use ADFS | Built-in | Use ADFS | Built-in | If you want plug-and-play access to Microsoft’s online systems, use Azure AD or spin up ADFS. |
As you can see, there are a myriad of factors at play, but the larger perspective is simple: small organizations that haven’t made an investment in AD should use Azure AD, while larger, more complex organizations with multiple domains will want to leverage ADFS, and at the highest end of complexity, Forefront Identity Manager, to continue to get the best value for their management needs.
By New Signature
Last Wednesday night NFTE DC held its 16th Annual Dare to Dream Gala! It was a magical evening of stories of how entrepreneurship changes lives. Guests were touched by the words of our students, teachers and Locally Grown Honorees, who included Christopher Hertz of New Signature, and inspired by the talented young business people at the Youth Showcase. Under the leadership of Gala Chair Cal Simmons, the event brought together more than 800 people and raised over $440,000 to support the 1,100 students NFTE serves across the Washington region.
New Signature is proud to support NFTE as they help close the opportunity divide by helping increase students’ entrepreneurial knowledge, with the ultimate outcomes of graduation, college attendance, business ownership and/or gainful employment.
By New Signature
PUBLISHED APRIL 16, 2013 on CRN
Microsoft Azure Cloud Service Challenges Amazon On Price, Reliability
By Rick Whiting
The general availability of Microsoft (NSDQ:MSFT)’s Windows Azure Infrastructure Services, which the company described as the final component of its cloud services lineup, puts the company in head-to-head competition with Amazon (NSDQ:AMZN) in the infrastructure-as-a-service market.
Microsoft also said it’s reducing the costs of its virtual machines and cloud services by 21 to 33 percent, promising to match Amazon Web Services (AWS) prices for cloud compute and storage services.
Microsoft partners that have been working with the cloud infrastructure services during its lengthy trial period say the move will help them offer customers lower-cost cloud application and development/testing services that promise higher reliability and uptime than on-premises IT.
“It really does allow you to be more agile as an organization,” said Reed Wiedower, CTO at New Signature, a Washington, D.C.-based solution provider that partners with Microsoft. As for the price cuts: “We’ve seen significant cost reductions across the board in the last one or two years with Azure,” he said.
While Microsoft is a player in the platform-as-a-service and software-as-a-service arenas, the general availability of Windows Azure Infrastructure Services puts Microsoft squarely in the IaaS market. The service allows businesses to move their Windows Server- and SQL Server-based virtual machines running on Microsoft Hyper-V — and the applications running on those VMs — to the cloud.
Microsoft has been providing the Azure IaaS service in preview mode since June. But, the general availability announcement means businesses can subscribe to the Azure IaaS service and get support and service-level agreements (SLAs) of 99.95 percent uptime.
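To put that 99.95 percent figure in perspective, here is a quick back-of-the-envelope calculation of how much downtime such an SLA actually permits (a 30-day month is assumed for illustration):

```python
# How much downtime does a 99.95% uptime SLA permit?
# Illustrative math only; a 30-day billing month is assumed.

def allowed_downtime_minutes(sla_percent: float, days: int = 30) -> float:
    """Minutes of downtime per period consistent with an uptime SLA."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(round(allowed_downtime_minutes(99.95), 1))  # -> 21.6 minutes per 30-day month
```

In other words, a 99.95 percent SLA leaves room for roughly twenty minutes of unplanned downtime in a month before the service-level agreement is breached.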
“No one wants their VMs to fail. Effectively, I don’t have to worry that my virtual machines will go down,” Wiedower said, noting that New Signature is an Azure Circle partner and has been working closely with Microsoft on Azure development projects. “The Azure virtual machines are designed from the ground up to be fault tolerant. Microsoft has done a really good job in the last eight or nine months in detailing how their virtual machines work.”
There are currently more than 200,000 Windows Azure customers, according to Microsoft, and about 1.4 million virtual machines have been uploaded to Azure Infrastructure Services since the preview became available. But, Microsoft has a long way to catch up with AWS, which launched in 2006.
In a blog post, Bill Hilf, Microsoft general manager of Azure product management, said the new Azure service offers high-memory 28-GB/4-core and 56-GB/8-core virtual machine instances for running heavy-duty workloads. Also new are validated instances for SQL Server, SharePoint, BizTalk Server and Dynamics NAV, among other Microsoft software.
Wiedower said Microsoft’s new Azure service fits with increasing demands he’s seeing for cloud-based development and test services that can cost far less than on-premises test and development projects. And, the New Signature CTO said he’s also seeing more demand from businesses that want to run Microsoft Active Directory in the cloud.
By Reed M. Wiedower
It’s graduation time here at New Signature, and we’re happy to announce that Azure Infrastructure as a Service (IaaS), including Azure Virtual Machines, has graduated to general availability.
In addition to the ability to now spin up highly available virtual machines, the Azure team simultaneously announced many other new features including:
- Pre-built images for popular applications such as SQL Server 2012, SharePoint 2013 and BizTalk Server 2013 to speed provisioning
- Larger virtual machines, including machines with 28 GB and even 56 GB of memory
- Bigger operating system drives, now up to 127 GB in space
- Price drops of 21% to 33% across the entire Azure platform
There are a multitude of workloads that fit the Azure Virtual Machine model. Many line-of-business applications are perfectly capable of running on Server 2008 or Server 2012, yet have never been migrated because doing so wouldn’t address the underlying dependencies on storage subsystems, networking or hardware. With Windows Azure, even single servers gain the ability to stretch storage and processing both within and across sites, allowing customers to virtualize these workloads, park them in Azure VMs, and be confident that networking, storage or hardware problems won’t impact their availability.
Another cost organizations currently incur that is a great match for Azure VMs is on-premises virtual machines used for development and testing. In the past, organizations had to make large capital investments (either dedicated desktops for developers that rapidly lost value, or large virtual hosts that cost more, yet were never utilized 24/7 enough to recover the investment) in order to meet the needs of their developers or system administrators. With Azure, instead of making a large cash outlay, organizations can invest a much smaller amount, let developers self-service their virtual machines as needed, and keep a tight rein on costs. If the organization decides that the money already invested would be better spent on storage, media services or even CDN, with Azure IaaS the VMs can be instantly frozen and the money used to fund those other priorities. By contrast, organizations that have spent large amounts on virtual hosts, only to see them lightly utilized, have no recourse to the dollars already spent.
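The capital-versus-operating trade-off above can be sketched numerically. All of the figures below are hypothetical assumptions chosen for illustration, not actual Azure or hardware prices:

```python
# Hedged illustration of the capex-vs-opex argument for dev/test workloads.
# Every number here is a made-up assumption, not a real Azure or hardware price.

def on_prem_cost(host_price: float) -> float:
    """On-premises capacity: the full host price is sunk up front,
    whether the hardware is utilized 24/7 or sits mostly idle."""
    return host_price

def azure_cost(hourly_rate: float, hours_used: float) -> float:
    """Pay-as-you-go: charges accrue only while the VM is running,
    so a frozen (deallocated) dev/test VM stops costing money."""
    return hourly_rate * hours_used

# Hypothetical scenario: a $12,000 virtualization host bought for a year of
# dev/test work, versus a cloud VM at $0.20/hour run only during work hours
# (8 hours/day, 22 working days/month, 12 months).
capex = on_prem_cost(12_000)
opex = azure_cost(0.20, hours_used=8 * 22 * 12)
print(f"on-premises outlay: ${capex:,.0f}, pay-as-you-go: ${opex:,.0f}")
```

Under these assumed numbers the pay-as-you-go bill is a small fraction of the up-front outlay, and, unlike the sunk hardware cost, it scales down automatically whenever the VMs are frozen.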
Finally, the most popular usage of Azure Virtual Machines to date that we’ve seen has been a desire to fully move key infrastructure components into the cloud, including Azure AD, and ADFS running on an Azure Virtual Machine. We’ll detail later this week the steps for organizations looking to migrate directory services into the cloud, and Azure VMs are a big part of that move. As a customer informed me, “Why would I move Exchange and SharePoint to the cloud, yet keep ADFS on-premises?” With Azure Virtual Machines, there’s no need to keep key servers running on-premises.
Interested in moving infrastructure into the public cloud? Looking for cost-savings, greater uptime *and* better flexibility? Talk to New Signature today to see how Azure IaaS can help bring your organization into the cloud.
April 15, 2013
Microsoft’s “GeoFlow” for Excel 2013 Delivers 3D Big Data Visualization and Storytelling Built on Bing Maps
By Christopher Hertz
Last week I was thrilled to see that Microsoft announced the preview availability of project codename "GeoFlow" for Excel 2013. GeoFlow is an awesome addition to Excel 2013 that lets you plot geographic and temporal data visually, analyze that data in 3D, and create interactive "tours" to share with others. This further builds on the value of Excel 2013 as the most popular and accessible business intelligence tool available. GeoFlow adds to the existing self-service Business Intelligence capabilities in Excel 2013, such as Microsoft Data Explorer Preview and Power View, to help discover and visualize large amounts of data, from Twitter traffic to sales performance to population data in cities around the world. To get started today, download the Add-in for Excel 2013 with Office 365 ProPlus or Office Professional Plus 2013. With GeoFlow, you can:
- Map Data: Plot more than one million rows of data from an Excel workbook, including the Excel Data Model or PowerPivot, in 3D on Bing maps. Choose from columns, heat maps, and bubble visualizations.
- Discover Insights: Discover new insights by seeing your data in geographic space and seeing time-stamped data change over time. Annotate or compare data in a few clicks.
- Share Stories: Capture "scenes" and build cinematic, guided "tours" that can be shared broadly, engaging audiences like never before.