October 4, 2012, by Brian Garback
Our customers demand mobility for Dynamics CRM. Microsoft knows this. Other partners know this. But the choice remains cloudy. What are the options? How much will it cost? Is it worth the extra investment? Should I go third party or wait for Microsoft to tell their mobility story? Can I afford to wait? I spent the long weekend in Las Vegas at eXtreme CRM getting to the bottom of this.
First up were two days of CWR Mobility training. I was thrilled to be attending, deepening the New Signature partnership with CWR, and expanding our Dynamics CRM story. Deploying and configuring CWR, while occasionally tedious like most technical tasks, could not be simpler. Functionally, the app gives you the basics: records, fields, related records, activities, etc. The challenges I faced were twofold: a troubling user experience, due to awkward online/offline modes, and a complex architecture. Rather than a single line of communication from app to the Dynamics CRM web services, CWR uses an additional server, which Michael Rich, CWR Technical Sales, says is “more scalable and more secure for enterprise deployments.”
After two long days (and a couple long nights… it is Vegas after all), I was spinning a bit. The product demos of CWR were solid. Their staff is fantastic. But how do I sell this to my customers knowing New Signature demands an incredible customer experience? Before we focused on CWR, I had heard through the grapevine of two other Dynamics CRM Mobile providers: TenDigits and Resco. I had to dig in.
I looked around for TenDigits, but the rumor is that they stopped development after Microsoft Dynamics CRM Mobile was announced and were therefore not at the conference. I called both their sales and service lines repeatedly and never got through. The last bit of news on their site is from nearly a year ago. Dead end.
Next option: Resco. I had heard good things, but given our blossoming relationship with CWR, I did not look into it. There they were, a booth full of technologies and technologists. Ivan Stano and Lukas Lesko were phenomenal. I tried out their Android app in the booth. Intrigued, I downloaded the iPhone app to my 4s, connected it to the New Signature instance and it worked like a charm! I have some concerns about the mapping features (requires longitude and latitude), but in all, I am a happy man.
So let’s see how these solutions stack up:
The mobility space is an exciting one where the only constant is change. This summer it seemed CWR would be the only player in town. Now we have two great solutions. The edge goes to Resco for now, but there’s still a place for CWR if native maps or BlackBerrys are required. New Signature is looking forward to working with both of these great companies and continuing to bring excellent mobile CRM solutions to market!
Note: New Signature made revisions to this article. CWR Mobility is $22.50 per user per month, not $30 as originally stated. Also, CWR supports not only BlackBerry, but also iPhone, iPad, Windows Phone 7, Android Phone, Android Tablet, and soon Windows 8 devices.
October 2, 2012, by Ben Pahl
Exchange 2013 includes a monumentally important feature to improve the resiliency of the platform – Managed Availability. In short, this baked-in system monitors for service availability and performs recovery actions automatically when a problem is detected.
Surprisingly, at the Microsoft Exchange Conference (MEC) last week, the first question the Exchange team got in the breakout session about Managed Availability was:
“Sounds great, but how do I turn it off?”
The Microsoft Exchange PMs leading the session were quite surprised to get this question repeatedly (four different people asked the same question throughout the session). To understand why Managed Availability is so integral and important, we have to dive into a bit more detail.
Managed Availability is a set of services that run on all Exchange 2013 servers. Each server monitors the health of itself and will take prescribed actions to fix problems. This is a break from previous Exchange releases which may have allowed 3rd party tools to pull monitoring information, but Exchange took no action when a monitor indicated a problem.
If the first action taken doesn’t fix the problem, escalating actions will be implemented automatically, depending on the problem. The ‘ultimate’ action would be the server taking itself offline so other nodes in the environment can pick up its workloads.
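To make the escalation idea concrete, here is a minimal Python sketch of an escalating responder chain. The action names and their ordering are hypothetical illustrations, not Exchange’s actual internals:

```python
# Simplified model of Managed Availability's escalating recovery actions.
# Action names and ordering are illustrative, not Exchange's real responder set.
ESCALATION_LADDER = [
    "restart worker process",
    "restart service",
    "failover database",
    "restart server",
    "take server offline",  # the 'ultimate' action
]

def recover(problem_is_fixed, ladder=ESCALATION_LADDER):
    """Try each action in order until the health probe reports success.

    problem_is_fixed: callable that re-checks health after an action.
    Returns the list of actions that were taken.
    """
    taken = []
    for action in ladder:
        taken.append(action)
        if problem_is_fixed(action):
            return taken  # recovery succeeded; stop escalating
    return taken  # every action tried; a human gets notified next

# Example: the problem only clears once the service is restarted.
print(recover(lambda a: a == "restart service"))
# → ['restart worker process', 'restart service']
```

The key property the sketch captures is proportionality: minor actions are always tried before disruptive ones, and the server only takes itself offline as a last resort.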
Two thoughts immediately pop into an Exchange admin’s head upon learning this:
- My Exchange server shouldn’t reboot when the unused POP service won’t start!
- How do I know my independent server’s response actions won’t cause a cascading failure that spirals out of control?
Let’s explore a few more aspects of Managed Availability before addressing these two concerns.
Each server node monitors itself. Who monitors the monitoring on each server?
Each Exchange server’s monitoring service is monitored by up to 3 neighboring Exchange servers, which pick their neighbors automatically. If an Exchange server’s neighbor sees its monitoring service is offline, it will automatically try to restart it. If that fails, additional action will be taken up to and including taking the server with the failed monitoring service out of production.
Each monitoring service has a master service and worker services. The master service spins up and tears down the worker processes on demand to perform specific tasks. The master/worker model for monitoring services (reminiscent of the new Master/Worker model for Store) prevents the failure of one monitoring task from stopping all other monitoring.
In addition to checking service up/down status, thresholds have been carefully calibrated for real world end-user monitoring. Things like timeouts for logging into OWA, composing and sending a message are checked to replicate real-world user actions. After all, if OWA is up but it takes 10 minutes to compose a new message, that isn’t really ‘up’ as far as your staff are concerned.
Response actions – this was an area Microsoft spent quite a bit of time researching, measuring and calibrating. In an earlier post, Exchange 2013 – Simple architecture and resiliency via self-healing components, we touched on the compartmentalization of Exchange nodes. This allows many recovery actions to be accomplished simply by restarting a service, failing over a mailbox database or restarting a server. If Managed Availability detects a problem preventing users from accessing their mailboxes via normal access methods (OWA, Outlook, etc.), it will fail the mailbox database over to another node IF that node is healthier. If no healthier node is available, it will fail over to an equally healthy node if absolutely necessary. It will not fail over to another server that is *less* healthy. CAS nodes are now essentially stateless, so if there are other healthy CAS nodes when a node is removed, services will be failed over by a front-end load balancer.
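The healthier-first failover rule can be sketched in a few lines. The numeric health scores below are a hypothetical stand-in for Exchange’s internal health model, used only to illustrate the decision:

```python
def pick_failover_target(current_health, candidates, must_fail_over=False):
    """Choose a failover target per the healthier-first rule.

    candidates: dict of node name -> health score (higher is healthier;
    scores are an invented stand-in for Exchange's health states).
    Returns the healthiest strictly-better node; an equally healthy node
    only when failover is absolutely necessary; never a less healthy one.
    """
    if not candidates:
        return None
    best_node = max(candidates, key=candidates.get)
    if candidates[best_node] > current_health:
        return best_node  # prefer a strictly healthier node
    if must_fail_over and candidates[best_node] == current_health:
        return best_node  # equally healthy, only if we absolutely must
    return None  # never fail over to a less healthy node
```

For example, a node with health 50 would fail over to a peer at 70, stay put if every peer is also at 50 (unless failover is forced), and stay put no matter what if every peer is at 40.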
Very few situations will result in an Exchange server automatically rebooting. Auto-rebooting isn’t new to Windows or Exchange, but there are a few more situations that will cause this now. For instance, if the volume that holds active Exchange databases is removed from the operating system or Exchange is unable to write to that volume for a sustained amount of time, the server will reboot.
In addition to knowing the health status of other servers in the environment, Exchange won’t fail over or restart services endlessly. Rather, it will slow down or stop repeated recovery/failover actions to prevent cascading failures in the environment.
And lastly, Exchange Managed Availability will escalate to a human via System Center Operations Manager or logged events if a problem can’t be self-healed. Higher-level recovery configurations and options (e.g. auto-reseeding, auto-DAG-network configuration) which allow Managed Availability to operate around clearly defined fault zones may be explored in a separate post in the future.
So, back to our Exchange Admin’s questions.
- Response actions are proportionate and carefully calibrated to ensure minor symptoms get minor responses and major symptoms get major responses
- Exchange servers are aware of their own health as well as other nodes’ health states. Response actions are intelligently designed to make the situation *more* healthy, not less. Repeated problems will cause the environment to slow down or stop response actions altogether
And then the ultimate question: “How do I turn it off?”
You can change monitoring thresholds and response actions if your environment is extremely unique, even turning off response actions entirely.
We can all sleep better knowing our Exchange 2013 servers are intelligent enough to self-heal problems, large and small.
October 1, 2012, by Ben Pahl
MEC 2012 brought us news of a ton of exciting new features and functionality in Exchange 2013, on-premises and in Office 365.
It quickly became clear from conversations with the Exchange team that Office365 is the best thing that’s ever happened to Exchange design and development. The very same people responsible for designing and building Exchange now run one of the biggest Exchange deployments the world has seen. A lot of insight into fault domains was gained from the operation and management of Office365.
The Exchange 2013 environment is built to self-heal from the smallest component level all the way up. Dependencies are reduced, components are combined and configuration options are set out of the box to optimize resiliency for small all the way up to enterprise size environments.
Why should an Exchange admin manually perform failover actions or repair small problems? When properly designed, an Exchange environment’s failover and recovery options can be pre-defined, and in 99% of cases a human doesn’t need to decide what action to take to restore service. If OWA isn’t working on a node, for example, it’s a safe bet that resetting the IIS application pool will fix the issue.
Exchange 2013 is built around the idea that single components should be compartmentalized so their failure has a minimal effect on the whole environment. Changes to the entire Exchange stack make this compartmentalization a reality, and Managed Availability makes the entire system capable of intelligently self-healing. By building Managed Availability into the product the Exchange team ensured every Exchange 2013 environment has these features out of the box, not just shops that use additional monitoring and recovering products.
We’ll be exploring some of these new features (and additional features!) and design changes in more detail soon.
September 30, 2012, by Parvinder Randev
Let me start by acknowledging the team of authors (Rand Morimoto, Michael Noel, Guy Yardeni, Omar Droubi, Andrew Abbate and Chris Amaris) for a well crafted work based on their real-world experience working with Windows Server 2012. They provided a wealth of information that will be extremely valuable in the planning, designing, deployment, implementation, and migration of IT infrastructure for clients. Every section of the book is filled with knowledge gleaned from first-hand experience with Windows Server 2012, including operating in a live production environment as well as best practices that will help Consultants and Architects who will design and deploy Windows 2012 solutions.
Unlike drier technical works, “Windows Server 2012 Unleashed” was quite engaging, from the first page until the conclusion…1496 pages later. As a lengthy work, it’s divided into 11 parts (36 chapters), with core knowledge included in each one, as well as overall best practices. The book clearly depicts how Windows Server 2012 focuses on datacenter and cloud-based back-end infrastructure. The authors maintain the book’s focus on what’s new, what’s the same and what’s different in Windows 2012 compared to earlier server versions. This allows every chapter to stand on its own as a useful collection of information.
The book begins with new concepts introduced in Windows Server 2012 such as Self-Healing NTFS, Server Message Block 3.0, Hyper-V, Storage Spaces, Data Deduplication, visual changes in Windows Server 2012, Windows Server 2012 as an application server, IIS multitenant support, Cluster-Aware Updating, Windows Server 2012 Active Directory, Global Catalog Cloning, Managed Service Accounts, Authentication Mechanism Assurance and offline domain join. Yes, it’s a mouthful, and one sure to keep every system administrator happy for quite some time.
There’s also a strong focus on the security enhancements in Windows Server 2012 which includes increased support for security standards, enhancements in Windows Security subsystems, options to leverage Windows 2012 Server core, Dynamic Access Control for flexibility in role-based security, DNS Security Extensions (DNSSEC) and zone signing for network protection, transport security using IPSEC/Certificates, security and management policy enforcement, Bitlocker for server security and Rights Management Services (RMS) for data leakage protection.
Remote access technologies available in Windows Server 2012 are also detailed in depth for “Work Anywhere” scenarios including Direct Access, RODC’s for branch offices, Branch Cache file access, Remote Desktop Services for thin client access and Windows to GO. I especially enjoyed the detailed explanation on IPAM (IP Address Management tool), Performance and Reliability Monitoring tools, Best Practices Analyzer, WDS Integration and DFS-R.
As a detailed reference, “Windows Server 2012 Unleashed” can’t be beat. Every system administrator currently supporting (or looking to support) Windows Server 2012 should have it on their bookshelf.
September 28, 2012, by New Signature and Parvinder Randev
Microsoft Lync Server 2010 Unleashed provides a great readable way to understand Lync Server 2010. Lync Server 2010 is a big product, with lots of abilities and features. This book is a great way to get to grips with deployment and administration.
While reading this book I was able to get a working standard edition install running in my test lab in a few hours. The info provided in the book is accurate and helpful.
Alex, Andrew and Tom have truly leveraged their tremendous amount of experience with Microsoft UC/UM products in this book. They have identified common mistakes and presented proven solutions and workarounds. Simply put, this book tells you what works–and shows you how to make it work.
The book combines theory, step-by-step configuration instructions, and best practices from real enterprise environments. It brings together “in-the-trenches” guidance for all facets of planning, integration, deployment, and administration, from expert consultants who’ve spent years implementing Microsoft Unified Communications solutions.
I recommend this book to the system admins who plan to upgrade their skills and infrastructure to Lync Server 2010.
September 25, 2012, by New Signature and Bryan Hackett
New Signature is pleased to announce the launch of Ask Herzl.com.
Ask Herzl, a project of the Israel on Campus Coalition, is a central hub for ready-to-run Israel programming, a venue to share best practices in the form of How-To Guides, and a place to search for the right speakers for your events. It also connects students and professionals around the country who are working on similar initiatives and facing similar challenges to collaborate and share ideas.
Ask Herzl looks to address a core challenge for college organizations: “How do we maintain knowledge and sustain program momentum after the student leaders graduate?” New Signature performed extensive user research to help understand the requirements of this problem space. The discovery phase culminated in a set of functional wireframes that allowed the ICC team to visualize the proposed solution.
Our technical team evaluated a host of options to implement the website. New Signature recommended that Ask Herzl.com be implemented using the Ruby on Rails web application framework. This development technology allowed for rapid iteration and reduced development timelines, moving the project from approved design to production application in under 8 weeks.
New Signature provides a range of Strategy, Design, and Development services. Please contact us if you are interested in learning how we can work together.
By Tzeitel Sorrosa
So who was Theodor Herzl, anyway? That was just one aspect of what we tried to convey as we executed the design of the Ask Herzl website with fresh new goggles. We approached Israel on Campus’ goal of building an online community for Israel activists with the uncommon in mind. In search of a meaningful story, we went back in time to the 1800s, wearing modern-day blue jeans and insouciant flip-flop sandals. The color palette is a fusion between old-fashioned neutrals and vivid aqua tones that enhance the lithographic-style illustrations. To speak to our audience, we became the energetic, new-generation Israel advocates wearing a college sweatshirt that yelled “I am Israel’s voice on campus and beyond.” Rustic, hand-drawn illustrations of Israel landmarks that mimic the Ask Herzl logo’s personality are permanently displayed throughout the website, while a caricature of Herzl suspended in a hot air balloon looks through binoculars in search of new frontiers. The creative execution and design was love at first site for our client, Israel on Campus, and for the team, as we captured the spirit and voice of our audience with a sense of individuality and lightheartedness. We sure hope Mr. Theodor Herzl is applauding as well, wherever he may be!
September 21, 2012, by Peter Day
Continuing our look into Windows Server 2012’s new security features, today we’ll focus on a critical component: auditing.
Auditing is about keeping records of events for later reference. It gives you the ability to determine who did what actions to which data, and who did not. Auditing is also an often overlooked aspect of computer network security. The first goal of network security is always to prevent security compromises from occurring. A second goal is to detect incidents if they do happen, and a third is to learn from what happened to try to prevent it from occurring again. Auditing is what helps you achieve the second and third goals, but only if you set it up properly before an incident happens.
How do I start auditing in Windows Server 2012?
You can configure audit policies within a Group Policy Object (GPO) and then assign that GPO to part or all of the network using the Group Policy Management tool. In Windows Server 2012 the location of all the policies I will discuss in this article is at the following place within a Group Policy Object:
Computer Configuration\Policies\Windows Settings\Security Settings\Advanced Audit Policy Configuration
Deciding to audit events is the easy choice; the harder choice is deciding what to audit. If you audit too much then the useful data becomes more difficult to find amongst the wash of event logs. The best approach is to start small, auditing just a few items, then build up additional auditing if it is truly needed. Here are three GPO settings you can start with:
Setting One: “Audit User Account Management” in the Account Management policy
Setting this option will write an event to the security log whenever certain edits are made to user accounts, including the creation and deletion of accounts. It will also alert you to any changes in permissions for network administration level user accounts. Combined with setting three below this is a useful way to track changes in authorization levels for all accounts and especially the admin accounts.
Setting Two: “Audit logon” in the Logon / Logoff policy
This option will write an event to the security log whenever a user logs on. For the full picture you should check the boxes to audit both successful and unsuccessful logon attempts. Large numbers of unsuccessful logon attempts can indicate the presence of an attack on your network.
Setting Three: “Audit Security Group Management” in the Account Management policy
An event can be written to the security log every time an edit is made to any security group. You should check the boxes to audit both successful and unsuccessful group management attempts. Auditing this allows you to see if someone is trying to grant themselves additional security rights without authorization to do so.
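Once these settings are generating events, the resulting logs can be mined. As a rough illustration of consuming the unsuccessful-logon signal from Setting Two, here is a Python sketch that counts failed logons (Event ID 4625 in the Windows security log) per account from a CSV export; the column names are assumptions about your export format, not a fixed standard:

```python
import csv
from collections import Counter

FAILED_LOGON = "4625"  # Windows security log event ID for a failed logon

def count_failed_logons(csv_path):
    """Tally failed logon events per account from an exported security log.

    Assumes a CSV export with 'EventID' and 'Account' columns
    (hypothetical names; adjust to match your actual export).
    """
    failures = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("EventID") == FAILED_LOGON:
                failures[row.get("Account", "unknown")] += 1
    return failures
```

An account racking up an outsized share of failed logons is exactly the kind of pattern, noted above, that can indicate an attack on your network.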
What about more advanced auditing in Server 2012?
Windows Server 2012 also provides some extremely flexible options for defining audit policies when you configure the “Global Object Access Auditing” policy within a GPO. With the Global Object Access Auditing policy you can choose to monitor not just file access success or failure but also what actions were carried out or attempted on the file – such as read, write, delete, change file permissions and so on. You can narrow down the scope of the file auditing to specific users or groups of users. This is the same flexibility of policy definition that we saw in our earlier post on Dynamic Access Control.
For example, you can create a group policy, assign it to the domain in the Group Policy Management tool, and have it alert you whenever a member of the group “staff” attempts to delete files or folders without the authorization to do so. These settings are shown in the following screenshot:
Note that for the file auditing to work you also have to enable the “audit file system” setting in the Object Access policy as shown in the picture above.
So, Windows Server 2012 allows you to be very precise in the events you choose to audit and log – no longer do you have to log everything in the hope of catching, and more importantly hoping to find, the information you actually need.
What about the logs?
Auditing is most effective when the logs are reviewed on a regular basis. In a network with many servers to be checked you can configure the logs to all be forwarded to one server where you can install software to mine them for information – this is something we’ll be covering in a later blog post.
Now that you are auditing multiple items you can expect the security log to grow faster – by default the log will overwrite old events when it runs out of space. If you don’t want this to happen then right-click on the security log and choose Properties and select the option “Archive the log when full, do not overwrite events”. As long as you have enough disk space this could give you months or even years’ worth of audit log history, which can be vital when investigating a long running issue.
Is there anything else to be aware of?
As important as auditing is, it is equally important to remember one of its limitations: auditing may tell you that a user, John Doe, attempted to access restricted files in the finance department share, however all that really tells you is that someone using the account “John Doe” attempted to access those files. Whether it was in fact the real John Doe or someone else using his network account is a much harder question to answer with certainty. This is important to remember before taking action on the results of auditing. A way to reduce this limitation would be to use two-factor authentication to logon. That will make it more likely that the real owner of the account was the person who used it – especially if you use a fingerprint as one of the authentication factors.
Where can I find out more?
New Signature has years of experience configuring security on Windows Server 2008 and Server 2012 environments. Please give us a call – we would be happy to work with you to review your auditing needs and draw up an implementation plan.
September 18, 2012, by New Signature
The International Academy of the Visual Arts announced today that websites designed and built by New Signature won four awards in the 2012 W3 Awards. With over 3,000 entries received, the W3 Awards honor outstanding websites, web marketing, web video, and mobile apps created by some of the best interactive agencies, designers, and creators worldwide. New Signature’s winning websites include: Digital Learning Now! (Gold in Non-Profit), Mobile Commons (Silver in Marketing), Working America (Silver in Non-Profit), and the Stanford Center for Internet and Society (Silver in School/University).
“We are honored to once again recognize creative excellence on the Web, and are humbled to witness all the amazing work being done throughout the industry,” said Linda Day, Executive Director of the IAVA. “From everyone at the Academy, we congratulate our 2012 W³ Award entrants and winners for their contributions and commitment to the online world in which we live.”
New Signature is excited to have delivered strategic thinking, distinctive design, and emerging technologies to create and produce innovative websites and applications that have won top honors at the W3 awards for our customers in 2012. Our process driven approach is specifically geared to taking on complex projects that require substantial creativity, strategic vision and stellar technology expertise.
September 4, 2012, by Peter Day
With Windows Server 2012 officially launching today, we’ve begun to answer many questions from customers about its new features. Today we’ll review one of the key new security features in the product: Dynamic Access Control.
What is Dynamic Access Control?
Dynamic Access Control allows you to set authorization policies across one or more file servers that determine access to resources based on multiple attributes of a user or computer object. For example, you can restrict access to documents containing Social Security Numbers to users whose “department” attribute is set to “HR”. Another improvement is that Dynamic Access Control allows you to specify that a user must be a member of two (or more) specific groups in order to be authorized to access a file – you could not do that with NTFS permissions under Server 2008.
What about NTFS and Share permissions?
NTFS and share permissions are still very much alive and in use. When making an authorization decision Windows Server 2012 will consider NTFS permissions and any share permissions, as well as any settings that arise from the Dynamic Access Control configuration.
What are the requirements?
In addition to the configuration described below here are the operating system requirements:
- The file server with the protected data must be running Windows Server 2012.
- There must be at least one Windows Server 2012 Domain Controller.
- Windows 8 is required if you want to use attributes of the computer object to define the protection.
- Note that you do not have to raise the Forest or Domain Functional Level to be Server 2012, so you can still use Server 2008 Domain Controllers.
What’s a claim?
A claim is a statement about an attribute of a user or computer object that is made by a trusted entity – specifically by a Windows Server 2012 Domain Controller. In the AD Administrative Center (ADAC) you will find that the Claim Types container under the Dynamic Access Control node is empty by default. You create a claim type by giving it a name and linking it to an Active Directory attribute of a computer or user object type. There can be more than one claim made about any specific user or computer. By using claims you can base authorization decisions on many more attributes than just the SID or security group membership attributes that were supported in earlier versions of Windows. For example, you can define what data can be accessed based on the country-of-residence attribute recorded in the user object.
What about the resources to be secured?
The next step is to enable or define the properties of resources that you will later be using in the authorization rules. This is done within the “resource properties” container under the Dynamic Access Control section of the ADAC. For example, you can enable the property “department” to record the department to which a particular resource (e.g. a document) belongs.
How do you apply the settings?
A Central Access Policy contains rules that you define called Central Access Rules, and the policy is applied using the Group Policy infrastructure. A Central Access Policy is not mandatory for an implementation of Dynamic Access Control but it does allow for the consistent application of settings across multiple servers. A Central Access Rule defines a group of objects (e.g. all users with a number “5” in their license plate) and grants a permission (e.g. read-only) to those objects for a defined group of resources. By default the rules apply to all resources, but you could limit the resources to which the rule will apply. For example, we can grant “read-only” rights to all domain users with a “5” in their license plate attribute to all documents with a value of “human resources” in their “department” attribute. Once the rule is defined you can choose to apply it live or you can choose to use a “staging” mode. In staging mode the rule is evaluated when a request to access a protected resource is made but is not applied – instead an audit event is written detailing what the effect of the rule would have been had it been applied.
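The rule evaluation described above can be modeled in a few lines. This is a toy sketch of claims-based authorization with invented claim and property names, not the actual Windows access-check algorithm:

```python
def access_granted(user_claims, resource_props, rule):
    """Return the rule's permission if the user's claims and the
    resource's properties both satisfy the rule, else None.

    A toy model of a Central Access Rule; the dict shapes and
    attribute names are illustrative inventions.
    """
    user_ok = all(user_claims.get(k) == v for k, v in rule["user"].items())
    resource_ok = all(resource_props.get(k) == v for k, v in rule["resource"].items())
    return rule["permission"] if user_ok and resource_ok else None

# The license-plate example from the text, with hypothetical attribute names:
rule = {
    "user": {"license_plate_has_5": True},
    "resource": {"department": "human resources"},
    "permission": "read-only",
}
print(access_granted({"license_plate_has_5": True},
                     {"department": "human resources"}, rule))  # → read-only
```

A staging mode could be layered onto this sketch by logging what the function would have returned instead of enforcing it, which mirrors how staging lets you audit a rule’s effect before applying it live.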
Why is accuracy essential?
We can see from the above example that the accuracy of the properties of the objects (e.g. “license plate” and “department”) is essential to the proper functioning of the rule. Now that any user object attribute may have an effect on authorization to access files, you may have to review your policies on who is allowed to edit the attributes of user and computer objects in AD. The accuracy of other attributes of a user or computer object is now as important as setting the correct security group membership has always been.
What about file classification?
The classification of files is the population of the resource attributes you have already defined. Classification is usually done manually, but it is possible to automate some of it. For example, you could scan files for strings that match a Social Security Number, xxx-xx-xxxx, and then classify those files as “HR” or “sensitive”. Later you can use the assigned file classification in your Central Access Policy to define which values of user attributes a user needs in order to access a specific classification of documents.
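A minimal sketch of that automated scan might look like this. The attribute values are illustrative, and a real deployment would use the built-in File Classification Infrastructure rather than an ad-hoc script:

```python
import re

# xxx-xx-xxxx, the Social Security Number shape mentioned above
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def classify(text):
    """Return resource-property values for content containing an
    SSN-shaped string; an empty dict otherwise.

    The property names/values here are hypothetical examples.
    """
    if SSN_PATTERN.search(text):
        return {"department": "HR", "sensitivity": "sensitive"}
    return {}
```

For instance, `classify("Employee SSN: 123-45-6789")` would tag the content with the HR/sensitive properties, while ordinary text would be left unclassified.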
What is Access Denied Assistance?
You can define in detail the error messages that a user will see when their access to a resource is denied. In that situation you can also give the user options for remediation of the error – such as redirecting them to a self-remediation website, or giving them the option of sending an email request to the owner of the data to request access.
How do I implement Dynamic Access Control?
Here’s an outline of the steps involved:
- Create one or more claim types.
- Enable one or more resource properties.
- Create Central Access Rules using the claim type(s) and resource properties.
- Add the rules to a Central Access Policy and configure the permissions in “staging” mode.
- Use Group Policy to deploy the Central Access Policy to server(s).
- Test users’ access to the protected data and confirm the authorization is working as expected by checking the audit events that are generated.
- Once testing is successful take the Central Access Policy out of “staging” mode so that its settings are now live.
Is Dynamic Access Control for me?
As you can see, Dynamic Access Control is a powerful yet non-trivial set of technologies that requires adequate planning and testing to successfully implement. Before setting up the required infrastructure you should review your business needs to determine whether or not Dynamic Access Control is suited to meeting them. Give New Signature a call if you have any questions or concerns about these technologies and whether they can work for you.