Daily Archives

Articles indexed Friday September 7 2012

Page 12/17 | < Previous Page | 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • APress Deal of the Day - 6/Sep/2012 - Pro Access 2010 Development

    - by TATWORTH
    Today's $10 deal of the day from APress at http://www.apress.com/9781430235781 is Pro Access 2010 Development. "Pro Access 2010 Development is a fundamental resource for developing business applications that take advantage of the features of Access 2010. You'll learn how to build database applications, create Web-based databases, develop macros and VBA tools for Access applications, integrate Access with SharePoint, and much more."

    Read the article

  • SQL Server Table Polling by Multiple Subscribers

    - by Daniel Hester
    Background

    Designing stored procedures that are safe for multiple subscribers to call simultaneously can be challenging. For example, let's say that you want multiple worker processes to poll a shared work queue that's encapsulated as a SQL table. This is a common scenario, and through experience you'll find that you want to use table hints to prevent unwanted locking when performing simultaneous queries on the same table.

    There are three table hints to consider: NOLOCK, READPAST and UPDLOCK. Both the NOLOCK and READPAST table hints allow you to SELECT from a table without placing a LOCK on that table. However, a SELECT with the READPAST hint will ignore any records that are locked due to being updated/inserted (or otherwise "dirty"), whereas a SELECT with NOLOCK ignores all locks, including dirty reads.

    For the initial update of the flag (that marks the record as available for subscription) I don't use the NOLOCK table hint, because I want to be sensitive to the "active" records in the table and I want to exclude them. I use an update lock (UPDLOCK) in conjunction with a WHERE clause that uses a sub-select with a READPAST table hint, in order to explicitly lock the records I'm updating (UPDLOCK) but not place a lock on the table when selecting the records that I'm going to update (READPAST).

    UPDATEs should be allowed to lock the rows affected, because we're probably changing a flag on a record so that it is not included in a SELECT from another subscriber. On the UPDATE statement we should explicitly use UPDLOCK to guard against lock escalation. A SELECT to check for the next record(s) to process can result in a shared read lock being held by more than one subscriber polling the shared work queue (SQL table). It is expected that more than one worker process (or server) might try to process the same new record(s) at the same time. When each process then tries to obtain the update lock, none of them can, because another process has a shared read lock in place. Thus, without the UPDLOCK hint the result would be a lock escalation deadlock; with the UPDLOCK hint this condition is mitigated.

    Note that using the READPAST table hint requires that you also set the ISOLATION LEVEL of the transaction to READ COMMITTED (rather than the default of SERIALIZABLE).

    Guidance

    In the stored procedure that returns records to the multiple subscribers:

    Perform the UPDATE first. Change the flag that makes the record available to subscribers. Additionally, you may want to update a LastUpdated datetime field in order to be able to check for records that "got stuck" in an intermediate state, or for other auditing purposes.

    In the UPDATE statement, use the (UPDLOCK) table hint to prevent lock escalation.

    In the UPDATE statement, also use a WHERE clause with a sub-select that uses a (READPAST) table hint to select the records that you're going to update.

    In the UPDATE statement, use the OUTPUT clause in conjunction with a temporary table to isolate the record(s) that you've just updated and intend to return to the subscriber. This is the fastest way to update the record(s) and to get the records' identifiers within the same operation.

    Finally, do a set-based SELECT on the main table (using the temporary table to identify the records in the set) with either a READPAST or NOLOCK table hint. Use NOLOCK if there are other processes (besides the multiple subscribers) that might be changing the data that you want to return to the multiple subscribers; use READPAST if you're sure there are no other processes (besides the multiple subscribers) that might be updating column data in the table for other purposes (e.g. changes to a person's last name). NOLOCK is generally the better fit in this part of the scenario. (A sketch of a .NET worker process consuming this procedure follows the references below.)

    See the following as an example:

        CREATE PROCEDURE [dbo].[usp_NewCustomersSelect]
        AS
        BEGIN
            -- OVERRIDE THE DEFAULT ISOLATION LEVEL
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED

            -- SET NOCOUNT ON
            SET NOCOUNT ON

            -- DECLARE TEMP TABLE
            -- Note that this example uses CustomerId as an identifier;
            -- you could just use the Identity column Id if that's all you need.
            DECLARE @CustomersTempTable TABLE
            (
                CustomerId NVARCHAR(255)
            )

            -- PERFORM UPDATE FIRST
            -- [Customers] is the name of the table
            -- [Id] is the Identity column on the table
            -- [CustomerId] is the business document key used to identify the
            -- record globally, i.e. in other systems or across SQL tables
            -- [Status] is an INT or BIT field (if the status is a binary state)
            -- [LastUpdated] is a datetime field used to record the time of the
            -- last update
            UPDATE [Customers] WITH (UPDLOCK)
            SET [Status] = 1, [LastUpdated] = GETDATE()
            OUTPUT [INSERTED].[CustomerId] INTO @CustomersTempTable
            WHERE ([Id] IN (SELECT TOP 100 [Id]
                            FROM [Customers] WITH (READPAST)
                            WHERE ([Status] = 0)
                            ORDER BY [Id] ASC))

            -- PERFORM SELECT FROM ENTITY TABLE
            SELECT [C].[CustomerId],
                   [C].[FirstName],
                   [C].[LastName],
                   [C].[Address1],
                   [C].[Address2],
                   [C].[City],
                   [C].[State],
                   [C].[Zip],
                   [C].[ShippingMethod],
                   [C].[Id]
            FROM [Customers] AS [C] WITH (NOLOCK),
                 @CustomersTempTable AS [TEMP]
            WHERE ([C].[CustomerId] = [TEMP].[CustomerId])
        END

    In a system that has been designed to have multiple status values for records that need to be processed in the work queue, it is necessary to have a "watch dog" process by which "stale" records in intermediate states (such as "In Progress") are detected; i.e. a [Status] of 0 = New or Unprocessed, a [Status] of 1 = In Progress, a [Status] of 2 = Processed, etc. Thus, if you have a business rule that states that the application should only process new records if all of the old records have been processed successfully (or marked as an error), then it will be necessary to build a monitoring process to detect stalled or stale records in the work queue; hence the use of the LastUpdated column in the example above. The Status field along with the LastUpdated field can be used as the criteria to detect stalled/stale records. It is possible to put this watchdog logic into the stored procedure above, but I would recommend making it a separate monitoring function. In writing the stored procedure that checks for stale records I would recommend using the same kind of lock semantics as suggested above. The example below looks for records that have been in the "In Progress" state ([Status] = 1) for greater than 60 seconds:

        CREATE PROCEDURE [dbo].[usp_NewCustomersWatchDog]
        AS
        BEGIN
            -- OVERRIDE THE DEFAULT ISOLATION LEVEL
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED

            -- SET NOCOUNT ON
            SET NOCOUNT ON

            DECLARE @MaxWait int;
            SET @MaxWait = 60

            IF EXISTS (SELECT 1
                       FROM [dbo].[Customers] WITH (READPAST)
                       WHERE ([Status] = 1)
                       AND (DATEDIFF(s, [LastUpdated], GETDATE()) > @MaxWait))
            BEGIN
                SELECT 1 AS [IsWatchDogError]
            END
            ELSE
            BEGIN
                SELECT 0 AS [IsWatchDogError]
            END
        END

    Downloads

    The zip file below contains two SQL scripts: one to create a sample database with the above stored procedures, and one to populate the sample database with 10,000 sample records. I am very grateful to Red Gate Software for their excellent SQL Data Generator tool, which enabled me to create these sample records in no time at all.

    References

    http://msdn.microsoft.com/en-us/library/ms187373.aspx
    http://www.techrepublic.com/article/using-nolock-and-readpast-table-hints-in-sql-server/6185492
    http://geekswithblogs.net/gwiele/archive/2004/11/25/15974.aspx
    http://grounding.co.za/blogs/romiko/archive/2009/03/09/biztalk-sql-receive-location-deadlocks-dirty-reads-and-isolation-levels.aspx
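    For completeness, here is a minimal C# sketch of how a worker process might poll this queue from .NET, assuming the usp_NewCustomersSelect procedure above; the connection string, database name and polling interval are hypothetical illustrations, not part of the original downloads:

        // Hedged sketch: a subscriber polling the shared work queue.
        // Each call to the stored procedure atomically claims up to 100
        // records (flagging [Status] = 1), so concurrent pollers skip them.
        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Threading;

        class QueueSubscriber
        {
            static void Main()
            {
                // Hypothetical connection string; adjust for your environment.
                const string connectionString =
                    "Server=.;Database=SampleQueueDb;Integrated Security=true;";

                while (true)
                {
                    using (var connection = new SqlConnection(connectionString))
                    using (var command = new SqlCommand("usp_NewCustomersSelect", connection))
                    {
                        command.CommandType = CommandType.StoredProcedure;
                        connection.Open();

                        using (var reader = command.ExecuteReader())
                        {
                            // Rows returned here were claimed by this subscriber.
                            while (reader.Read())
                            {
                                Console.WriteLine("Processing customer {0}",
                                    reader["CustomerId"]);
                            }
                        }
                    }

                    Thread.Sleep(5000); // hypothetical poll interval
                }
            }
        }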

    Read the article

  • Adventures in Windows 8: Understanding and debugging design time data in Expression Blend

    - by Laurent Bugnion
    One of my favorite features in Expression Blend is the ability to attach a Visual Studio debugger to Blend. First let's start by answering the question: why exactly do you want to do that?

    Note: If you are familiar with the creation and usage of design time data, feel free to scroll down to the paragraph titled "When design time data fails".

    Creating design time data for your app

    When a designer works on an app, he needs to see something to design. For "static" UI such as buttons, backgrounds, etc., the user interface elements are going to show up in Blend just fine. If however the data is fetched dynamically from a service (web, database, etc.) or created dynamically, most probably Blend is going to show just an empty element. The classical way to design at that stage is to run the application, navigate to the screen that is under construction (which can involve delays, the need to log in, etc.), measure what is on the screen (colors, margins, width and height, etc.) using various tools, go back to Blend, edit the properties of the elements, run again, and so on. Obviously this is not ideal. The solution is to create design time data. For more information about the creation of design time data by mocking services, you can refer to two talks of mine, "Deep dive MVVM" and "MVVM Applied From Silverlight to Windows Phone to Windows 8". The source code for these talks is here and here.

    Design time data in MVVM Light

    One of the main reasons why I developed MVVM Light is to facilitate the creation of design time data. To illustrate this, let's create a new MVVM Light application in Visual Studio.

    Install MVVM Light from here: http://mvvmlight.codeplex.com (use the MSI in the Download section). After installing, make sure to read the Readme that opens up in your favorite browser; you will need one more step to install the Project Templates.
    Start Visual Studio 2012.
    Create a new MvvmLight (Win8) app.
    Run the application. You will see a string showing "Welcome to MVVM Light".
    In the Solution Explorer, right click on MainPage.xaml and select Open in Blend.
    Now you should see "Welcome to MVVM Light [Design]".

    What happens here is that Expression Blend runs different code at design time than the application runs at runtime. To do this, we use design-time detection (as explained in a previous article) and use that information to initialize a different data service at design time. To understand this better, open the ViewModelLocator.cs file in the ViewModel folder and see how the DesignDataService is used at design time, while the DataService is used at runtime. In a real-life application, DataService would be used to connect to a web service, for instance. (A minimal sketch of this switch appears at the end of this post.)

    When design time data fails

    Sometimes however, the creation of design time data fails. It can be very difficult to understand exactly what is happening, and Expression Blend does not give a lot of information about what went wrong. Thankfully, we can use a trick: attaching a debugger to Expression Blend and debugging the design time code.

    In WPF and Silverlight (including Windows Phone 7), you could simply attach the debugger to Blend.exe (using the "Managed (v4.5, v4.0) code" option, even for Silverlight!). In Windows 8 however, things are just a bit different. This is because the designer that renders the actual representation of the Windows 8 app runs in its own process. Let's illustrate that:

    Open the file DesignDataService in the Design folder.
    Modify the GetData method to look like this:

        public void GetData(Action<DataItem, Exception> callback)
        {
            throw new Exception();

            // Use this to create design time data
            var item = new DataItem("Welcome to MVVM Light [design]");
            callback(item, null);
        }

    Go to Blend and build the application. The build succeeds, but now the page is empty. The creation of the design time data failed, but we don't get a warning message. We need to investigate what's wrong.
    Close MainPage.xaml.
    Go to Visual Studio and select the menu Debug, Attach to Process. Update: Make sure that you select "Managed (v4.5, v4.0) code" in the "Attach to" field.
    Find the process named XDesProc.exe. You should have at least two: one for the Visual Studio 2012 designer surface, and one for Expression Blend. Unfortunately in this screen it is not obvious which is which. Let's find out in the Task Manager.
    Press Ctrl-Alt-Del and select Task Manager.
    Go to the Details tab and sort the processes by name. Find the one that says "Blend for Microsoft Visual Studio 2012 XAML UI Designer" and write down the process ID.
    Go back to the Attach to Process dialog in Visual Studio. Sort the processes by ID and attach the debugger to the correct instance of XDesProc.exe.
    Open the MainViewModel (in the ViewModel folder).
    Place a breakpoint on the first line of the MainViewModel constructor.
    Go to Blend and open MainPage.xaml again.

    At this point, the debugger breaks in Visual Studio and you can execute your code step by step. Simply step inside the data service call and find the exception that you had placed there. Visual Studio gives you additional information which helps you to solve the issue.

    More info and Conclusion

    I want to thank the amazing people on the Expression Blend team for being very fast in guiding me in that matter and encouraging me to blog about it. More information about the XDesProc.exe process can be found here. I had to work on a Windows 8 app for a few days without design time data because of an Exception thrown somewhere in the code, and it was really painful. With the debugger, finding the issue was a simple matter of stepping into the code until it threw the exception.

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn
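    For reference, here is a minimal sketch of the design-time switch described above, assuming MVVM Light's ViewModelBase.IsInDesignModeStatic property; IDataService, DataService, DesignDataService and MainViewModel are the types generated by the MvvmLight (Win8) project template, and the exact shape here is illustrative rather than authoritative:

        // A minimal sketch of a ViewModelLocator that swaps data services
        // at design time, so Blend renders canned data while the running
        // app calls the real service.
        using GalaSoft.MvvmLight;

        public class ViewModelLocator
        {
            public ViewModelLocator()
            {
                IDataService dataService;

                if (ViewModelBase.IsInDesignModeStatic)
                {
                    // Blend (or the VS designer) is rendering the page:
                    // serve design time data instead of hitting a service.
                    dataService = new DesignDataService();
                }
                else
                {
                    dataService = new DataService();
                }

                Main = new MainViewModel(dataService);
            }

            public MainViewModel Main { get; private set; }
        }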

    Read the article

  • Nokia Lumia 920 Windows Phone 8 Announcement

    - by Tim Murphy
    Today Nokia and Microsoft had an event to officially introduce the Lumia 920. Below is a rundown of some of the things I found interesting.

    As a person who likes photography there was a lot to drool over. The main feature that caught my attention was PureView with its optical stabilization. This alone should improve the majority of your pictures. Add to that the SmartShoot object remover, which uses multiple images to remove unwanted people or objects that move through your picture, and you never have to accept reality again.

    For the most part the lenses concept introduced in Windows Phone 8 just makes the camera more usable. Of course that is Microsoft's selling point. One lens that caught my attention was the Bing lens. I have to say it is about time that we can take pictures and use them to search for answers using Bing.

    There were a couple of features shown that involved augmented reality. One was similar to the yapf application that is already in the market, which overlays restaurants and other destinations over live camera views. The other was using the navigation directions with a live view.

    Then you get down to some of the physical features of the Lumia 920. The one that got the most stage time is its 2000mAh battery, which can be charged wirelessly. They also pointed out the improved glare reduction of the 4.5 in. curved glass screen. This hardware improvement is taken further with software that detects glare conditions and adjusts the display attributes to enhance viewing ease.

    Adding to the wireless cool factor of the Lumia 920 are the general NFC capabilities. This was demonstrated with NFC docking stations as well as JBL speakers and headphones.

    There was one more hardware feature that I applauded. The super sensitive touch screen did away with one of my pet peeves with capacitive touch screens: you will never have to remove your gloves to operate your phone again. The mittens that they did the demo with looked more like boxing gloves.

    I was disappointed when Joe Belfiore said that they were only going to show a couple of new features of Windows Phone 8 and that we would hear more at future events. One of the things he did show is the ability to customize which buttons you prefer as defaults in IE10. For example you could have the folders button where the refresh button normally is. He also showed that at long last you can natively take screenshots on your phone. Hopefully he will be back quickly to give us the rest of the features.

    The most disappointing part of the event was that we never found out when the phones would be released or how much they would cost. Let's hope this comes soon. Even with these couple of items still left on my wish list I can't wait to get my hands on a Lumia 920.

    del.icio.us Tags: Windows Phone,Windows Phone 8,Nokia,Lumia,Lumia 920,Microsoft

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of Lab environment called Standard Environment. This allows you to set up a full Build Deploy Test (BDT) workflow that will build your application, deploy it to your target machine(s) and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you!

    Although each step is rather simple, the entire process of setting it up consists of a bunch of steps. So I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.

    High Level Steps

    1. Install and configure Visual Studio 2012 Test Controller on Target Server
    2. Create Standard Environment
    3. Create Test Plan with Test Case
    4. Run Test Case
    5. Create Coded UI Test from Test Case
    6. Associate Coded UI Test with Test Case
    7. Create Build Definition using LabDefaultTemplate

    1. Install and Configure Visual Studio 2012 Test Controller on Target Server

    First of all, note that you do not have to have the Test Controller running on the target server. It can be running on another server, as long as the Test Agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server etc.), the test controller can be installed either on one of those machines or on a dedicated machine.

    To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server since it installs .NET 4.5.

    When the test controller is installed, the Test Controller configuration tool will launch automatically (if it doesn't, you can start it from the Start menu). Here you will supply the credentials of the account running the test controller service. Note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. Also, you have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course).

    When you press Apply Settings, all the configuration will be done. You might get some warnings at the end that might or might not cause a problem later. Be sure to read them carefully.

    For more information about configuring your test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.

    2. Create Standard Environment

    Now you need to create a Lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine we will create a Standard Environment.

    Open MTM and go to Lab Center.
    Click New to create a new environment.
    Enter a name for the environment. Since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case).
    On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com), and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server. You also need to supply an account that is a local administrator on the target server. This is needed in order to automatically install a test agent later on the machine.
    On the next page, you can add tags to the machine. This is not needed in this scenario, so go to the next page.
    Here you will specify which test controller to use and that you want to run UI tests on this environment. This will result in a Test Agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample). If you can't see it, you might have selected a different TFS project collection.
    Press Next twice and then Verify to verify all the settings.
    Press Finish.

    This will now create and prepare the environment, which means that it will remotely install a test agent on the machine. As part of this installation, the remote server will be restarted.

    3-5. Create Test Plan, Run Test Case, Create Coded UI Test

    I will not cover steps 3-5 here; there is plenty of information on how you create test plans and test cases and automate them using Coded UI Tests. In this example I have a test plan called My Application, and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow.

    For more information about Coded UI Tests, see Verifying Code by Using Coded User Interface Tests.

    6. Associate Coded UI Test with Test Case

    OK, so now we want to automate our Coded UI Test and have it run as part of the BDT workflow. You might think that your coded UI test already is automated, but the meaning of the term here is that you link your coded UI test to an existing Test Case, thereby making the Test Case automated. And the test case should be part of the test suite that we will run during the BDT. (A minimal skeleton of such a test method appears at the end of this post.)

    Open the solution that contains the coded UI test method.
    Open the Test Case work item that you want to automate.
    Go to the Associated Automation tab and click on the "…" button.
    Select the coded UI test that corresponds to the test case.
    Press OK and then save the test case.

    For more information about associating an automated test with a test case, see How to: Associate an Automated Test with a Test Case.

    7. Create Build Definition using LabDefaultTemplate

    Now we are ready to create a build definition that will implement the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process will be running on the target server, you will have fewer problems with permissions and firewalls than if you were to remotely deploy your solution.

    So, before creating a BDT workflow build definition, make sure that you have an existing build definition that produces a release build of your application.

    Go to the Builds hub in Team Explorer and select New Build Definition.
    Give the build definition a meaningful name; here I called it MyApplication.Deploy.
    Set the trigger to Manual.
    Define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build. But TFS doesn't allow you to save a build definition without adding at least one mapping.
    On Build Defaults, select the build controller. Since this build actually won't produce any output, you can select the "This build does not copy output files to a drop folder" option.
    On the Process tab, select the LabDefaultTemplate.11.xaml. This is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the "…" button on the Lab Process Settings property.
    First, select the environment that you created before.
    Select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead it will basically just run through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition.
    On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If you for example have databases, you can use sqlpackage.exe to deploy the database. If you are producing MSI installers in your build, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server. Then you would just execute the deployment batch file here in one single step.

    The workflow defines some variables that are useful when running the deployments. These variables are:

    $(BuildLocation) - The full path to where your build files are located
    $(InternalComputerName_<VM Name>) - The computer name for a virtual machine in an SCVMM environment
    $(ComputerName_<VM Name>) - The fully qualified domain name of the virtual machine

    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission in this drop location. You can find more information here on Building your Deployment Scripts.

    On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan. In here I define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests.

    We are done! Queue your BDT build and wait for it to finish. If the build succeeds, your build summary should look something like this:

    Read the article

  • Help file formats - MSHA files v CHM files

    - by TATWORTH
    Recently I was tasked with producing a help file from a C#/WPF/Crystal Reports application using Sandcastle. I have previously blogged about the problems in doing that and the change that is going into the next version of Sandcastle that allows the vagaries of Crystal Reports (the missing BusinessObjects.Licensing.KeycodeDecoder) to be handled. At http://social.msdn.microsoft.com/Forums/en-US/devdocs/thread/0b110502-f5bb-4c56-96a5-4347a2a7a68a/, I describe how I tried each of the formats. Two of the formats could not be built, and the error messages were not exactly helpful as to the cause; these two formats turned out to be obsolete. The MSHA format worked but was not suitable for a standalone application, so that left me with the older CHM format. I therefore asked on that thread "will the HTML Help 1 (CHM) format continue to be supported for the foreseeable future?". Rob Chandler, MVP in help systems, gave a very helpful answer, to the effect that there is not yet a replacement for the CHM format.

    Read the article

  • TFS 2012 Upgrade and SQL Server - SharePoint - OS Requirements.

    - by Vishal
    Hello folks,

    Recently I was involved in the installation and configuration of a Team Foundation Server 2010 farm for a client. A month after the installation and configuration was done and everything was working as it was supposed to, Microsoft released Team Foundation Server 2012 in mid August 2012. The company had been using Borland StarTeam as their source control, and once they started to use TFS 2010, their developers and project managers were loving it, since TFS is not just a source control tool and is way better than StarTeam. Anyway, long story short, they are now interested in upgrading to the newest version. Below are some basic hardware and software requirements for TFS 2012:

    Operating System:
    Windows Server 2008 with SP2 (64-bit only)
    Windows Server 2008 R2 with SP1 (64-bit only)
    Windows Server 2012 (64-bit only)

    SQL Server:
    SQL Server 2008 R2 and SQL Server 2012
    SQL Server 2008 is no longer supported.
    SQL Server Requirements for TFS.

    SharePoint Products:
    SharePoint Server 2010 (SharePoint Foundation 2010, Standard, Enterprise)
    MOSS 2007 (Standard, Enterprise)
    Windows SharePoint Services 3.0 (WSS 3.0)
    SharePoint Products Requirements for TFS.

    Project Server:
    Project Server 2010 with SP1.
    Project Server 2007 with SP2.
    Project Server Requirements for TFS.

    More information on TFS Upgrade Requirements can be found here. Hardware recommendations can be found here.

    Thanks,
    Vishal Mody

    Read the article

  • O'Reilly Deal of the day - 5/Sep/2012 - Programming Windows 8 Applications with C#

    - by TATWORTH
    Today's deal of the day from O'Reilly at http://shop.oreilly.com/product/0636920024200.do?code=DEAL is Programming Windows 8 Applications with C#. "With Early Release e-books, you get books in their earliest form — the author's raw and unedited content as he or she writes — so you can take advantage of these technologies long before the official release of these titles. You'll also receive updates when significant changes are made, new chapters as they're written, and the final e-book bundle. If you want to build Windows 8 applications for desktops and the forthcoming Microsoft Surface tablet PC, this book will show you how to work with the Metro design language and the Windows RT operating system. You'll learn this new landscape step-by-step, including the minute system details and design specifications necessary to innovate and build a variety of Windows 8 apps. It's ideal for .NET developers who use C#."

    Read the article

  • Get to Know a Candidate (1 of 25): Tom Hoefling – America's Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about "Romney" or "Obama". This is not a post about whom I am voting for. Information sourced from Wikipedia.

    If you recall, on Sunday I blogged about not voting against a candidate and, instead, voting for a candidate. As promised, this is the first post of 25.

    Meet Tom Hoefling of America's Party

    In addition to being America's Party nominee, he's also the national chairman of the party. Mr. Hoefling also served as the political director for Alan Keyes' political group America's Revival. He is a representative for the American Conservative Coalition. Mr. Hoefling is on the ballot in CA, CO and FL, and is a qualified write-in candidate in IN.

    America's Party

    This party was originally known as "America's Independent Party" and considers itself conservative. The following describes their standing on specific issues.

    * Tax Reform - The party seeks to reform the tax structure by advocating the repeal of the 16th Amendment, and despite the fact that many members support the FairTax, the platform remains open on what to replace the Federal income tax with.
    * Other - The party supports the Federal Marriage Amendment being added to the U.S. Constitution. It is also pro-life on abortion.

    Learn more about Tom Hoefling and America's Party on Wikipedia.

    Read the article

  • Working with documents and SharePoint - Best practices

    - by KunaalKapoor
    Follow these simple guidelines to make collaboration using SharePoint easier:

    1. File Name:

    While it is allowed to use spaces in your filename (and maybe it seems even logical to do so), don't use them if your file will end up on (or is born on) SharePoint. When you use the "download a copy" functionality, SharePoint will replace the spaces with an "_". This might (will) result in inconsistency when you upload the "same" file again, since SharePoint will see this as a different file (since the filename is different). I recommend using a filename with a capitalization-style naming guideline. For instance, the document "Overall governance model.docx" would be named "OverallGovernanceModel.docx".

    Use the TITLE field in the Office applications to give your document a title (and subtitle, keywords, etc.). The Title column can be used in a view in a library. You can get to the document properties by clicking on Office Button/Prepare/Properties (Office 2007). This is metadata that is stored with the document, and will remain in the document (even if you exchange this document via e-mail or via an external hard drive).

    The filename cannot be longer than 128 characters (and that is IMHO far beyond reasonable). You cannot use any of these characters: ” # % & * : < > ? \ / { | } ~

    2. Versioning:

    SharePoint has a built-in versioning system. You can work with major (published) versions and minor (draft) versions. For each of these two version types, you can set the number of versions that are kept. Watch out: each version is saved in full, not only the delta between 2 versions, and this counts towards your Site Collection quota. (Example: you have a Word document with a size of 2 MB. When you keep 5 drafts this will result in storing (and consuming) 10 MB.)

    So, don't call your document "NewUserAccountProcessDRAFTv1.docx", but "NewUserAccountProcess.docx", and use the versioning settings in your library. You can enable views on your library to display the version number. You can enable the version number to be displayed in a Word document.

    3. Use Metadata:

    Use metadata to assign other properties to documents, so they can be easily identified, sorted or grouped by.
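    As a rough illustration of the versioning settings mentioned in point 2, here is a minimal C# sketch using the SharePoint 2010 server object model; the site URL, library name and version limits are hypothetical and would be set to match your own environment:

        // Hedged sketch: enabling versioning with limits on a document
        // library via Microsoft.SharePoint (runs on the SharePoint server).
        using Microsoft.SharePoint;

        class VersioningSetup
        {
            static void Main()
            {
                // Hypothetical site URL and library name.
                using (SPSite site = new SPSite("http://intranet/sites/team"))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList library = web.Lists["Shared Documents"];

                    library.EnableVersioning = true;          // major (published) versions
                    library.EnableMinorVersions = true;       // minor (draft) versions
                    library.MajorVersionLimit = 5;            // keep 5 published versions
                    library.MajorWithMinorVersionsLimit = 5;  // keep drafts for 5 majors

                    library.Update();
                }
            }
        }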

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
    I recently had the experience of being involved in a "test" scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons that I learnt from it.

    Don't let the Project Manager be the Product Owner

    First lesson learnt is to identify the correct product owner - in this instance the product manager assumed the role of the product owner, which was a mistake. The product owner is the one who has the most to lose if the project fails. With a methodology that advocates removing the role of the project manager from the process, it is not in the interests of the person who is employed as a project manager to be the product owner - in fact they have the most to gain should the project fail.

    Know the time commitments of team members to the project

    Second lesson learnt is to get a firm time commitment from the members on a team for the sprint and to hold them to it. In this project instance many of the issues we faced were with team members having to double up on supporting existing projects/systems and the scrum project. In many situations they just didn't get round to doing any work on the scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team - in stand up, team members would say they had done some work but would be very vague on how much time they had actually spent, using the black hole of their other legacy projects as an excuse. Putting up a time burn down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan for a sprint without knowing the actual time available of the members? By actual time I mean the result of the exercise of getting them to go through all their appointments, lunch times and breaks and removing these from their time commitment; this gets you to a realistic time that they can dedicate.

    Make sure you meet your minimum team sizes

    In a recent post I wrote about the difference between a partnership and a team. If you are going to do scrum in a large organization, make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people have a tendency to be sick more, take more leave and generally not be around - if you have a team size of two it is so easy to lose momentum on the project. The more people you have in the team (up to about 9), the more momentum the project will have when people are not around.

    Swapping from one methodology to another can seem like waste to the customer

    It sounds bad, but most customers don't care what methodology you use. Often they have bought into the "big plan upfront". If you can, avoid taking a project on midstream from a traditional approach unless the customer has not bought into the process. With this particular project they had a detailed upfront planning breakaway with the customer using the traditional approach, and then before the project started we moved onto a scrum implementation - this seemed like waste to the customer. We should have managed the customer's expectations properly.

    Don't play the role of the scrum master if you can't be the scrum master

    With this particular implementation I was the "scrum master". But all I did was go through the process of the formal meetings of scrum - I attended stand up, retrospectives and planning - but I was not hands on the ground. I was not performing the most important role of removing blockages, and by the end of the project there were a number of blockages "cropping up". What could have been a better approach was to take someone on the team, train them to be the scrum master and be present to coach them. Alternatively, actually be on the team on a fulltime basis and be the scrum master. Just going through the meetings of scrum didn't mean we were doing scrum.

    So we failed with this one; if you fail, look at it from an agile perspective

    As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure of it became depressing. Emotions were expressed by various people on the team that were not encouraging, and that reinforced the failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed, and then focusing on learning from why and how to avoid the failure in the future, can change how you feel emotionally about the team, the project and the organization.

    Read the article

  • CSS hack for Google Chrome and Safari

    - by Renso
    Use this hack when you want to override CSS in an external stylesheet just for Google Chrome and Safari. Here is an example where I override the margin-top for Chrome and Safari.

    Normal:

        #AccountMaintenanceWrapper #callDetailsPreviewWrapper
        {
            border: none;
            padding: 0px;
            width: 209px;
            position: fixed;
            margin-top: 84px;
            z-index: 1;
        }

    Google Chrome and Safari:

        @media screen and (-webkit-min-device-pixel-ratio:0)
        {
            #AccountMaintenanceWrapper #callDetailsPreviewWrapper
            {
                margin-top: 12px;
            }
        }

    Read the article

  • Why CoffeeScript is an issue

    - by Renso
    Other than some obvious concerns, my main concern is support in the open source community. "anon" from the CoffeeScript team sent this to me after I requested input from the team on concerns I raised and wanted to get others' take on it:

    "Thanks for confirming that only idiots willingly program in Java and C#"

    or the following from the same person:

    "Oh and finally, you should definitely create jShort. Even though I know you will fail before you even start, I would love to laugh at your attempts and it would be perfect for you since you ride the short bus."

    This kind of comment reflects badly on the CoffeeScript team and hence is not an option for us as a company to consider. Another example of why some open-source community projects get no traction.

    Read the article

  • Microsoft Press Deal of the day 4/Sep/2012 - Programming Microsoft® SQL Server® 2012

    - by TATWORTH
    Today's deal of the day from Microsoft Press at http://shop.oreilly.com/product/0790145322357.do?code=MSDEAL is Programming Microsoft® SQL Server® 2012. "Your essential guide to key programming features in Microsoft® SQL Server® 2012. Take your database programming skills to a new level—and build customized applications using the developer tools introduced with SQL Server 2012. This hands-on reference shows you how to design, test, and deploy SQL Server databases through tutorials, practical examples, and code samples. If you're an experienced SQL Server developer, this book is a must-read for learning how to design and build effective SQL Server 2012 applications."

    Read the article

  • What is testable code?

    - by Michael Freidgeim
    We are improving the quality of our code and trying to develop more unit tests. The question that developers asked was: "How do you make code testable?"

    From http://openmymind.net/2010/8/17/Write-testable-code-even-if-you-dont-write-tests/: "First and foremost, it's loosely coupled, taking advantage of dependency injection (and auto-wiring), composition and interface-programming. Testable code is also readable - meaning it leverages the single responsibility principle and the Liskov substitution principle."

    A few practical suggestions are listed in http://misko.hevery.com/code-reviewers-guide/

    More recommendations are in http://googletesting.blogspot.com/2008/08/by-miko-hevery-so-you-decided-to.html - it is slightly too theoretical: "the trick is translating these abstract concepts into concrete decisions in your code."
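    To make the "loosely coupled" point concrete, here is a small C# sketch of constructor-based dependency injection; all of the names are invented for illustration. The class under test depends on an interface, so a unit test can substitute a fake implementation for the real one:

        using System;

        // The dependency is expressed as an interface...
        public interface IClock
        {
            DateTime Now { get; }
        }

        // ...with a real implementation used in production...
        public class SystemClock : IClock
        {
            public DateTime Now { get { return DateTime.Now; } }
        }

        // ...and the class under test receives it via its constructor,
        // so a test can inject a fixed, predictable clock instead.
        public class GreetingService
        {
            private readonly IClock _clock;

            public GreetingService(IClock clock)
            {
                _clock = clock;
            }

            public string Greet()
            {
                return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
            }
        }

        // A minimal hand-rolled fake that a unit test would inject:
        public class FixedClock : IClock
        {
            private readonly DateTime _value;
            public FixedClock(DateTime value) { _value = value; }
            public DateTime Now { get { return _value; } }
        }

    A test can then assert deterministically, e.g. new GreetingService(new FixedClock(new DateTime(2012, 9, 7, 8, 0, 0))).Greet() should return "Good morning".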

    Read the article

  • O'Reilly deal of the week to 23:59 PT Sept 11 - Back-to-School Special

    - by TATWORTH
    At http://shop.oreilly.com/category/deals/b2s-2012-special.do, O'Reilly are offering up to 50% off a range of E-books, together with reductions on other items."Get definitive information on technology for developers, designers, admins – whatever you are or want to be. With our Back-to-School Special, you choose what to learn and we give you the tools to make it happen. Save 50% on eBooks and videos, 40% on print books from O'Reilly, Microsoft Press, SitePoint, and No Starch, or 30% on courses from O'Reilly School of Technology." There are some 37 books and e-books on offer together with 3 videos.

    Read the article

  • redhat Apache fast-cgi selinux permissions

    - by Alejo JM
    My Apache installation is running PHP as FastCGI; the virtual hosts point to /home/*/public_html, and the FastCGI wrappers are /home/*/cgi-bin/php.fcgi. The public_html setup with SELinux was:

        /usr/sbin/setsebool -P httpd_enable_homedirs 1
        chcon -R -t httpd_sys_content_t /home/someuser/public_html

    The owner and group are the user, for example the user "someuser":

        ls -all /home/someuser/cgi-bin/
        drwxr-xr-x 2 someuser someuser 4096 Sep 7 13:14 .
        drwx--x--x 6 someuser someuser 4096 Sep 6 18:17 ..
        -rwxr-xr-x 1 someuser someuser 308 Sep 7 13:14 php.fcgi

        ls -all /home/someuser/public_html/ | grep info.php
        -rw-r--r-- 1 someuser someuser 24 Sep 3 16:24 info.php

    When I visit the site I get "Forbidden ..." and the log says:

        [Fri Sep 07 12:02:51 2012] [error] [client x.x.x.x] (13)Permission denied: access to /cgi-bin/php.fcgi/info.php denied

    My SELinux conf is:

        SELINUX=enforcing
        SELINUXTYPE=targeted
        SETLOCALDEFS=0

    So I killed SELinux (SELINUX=disabled), rebooted the system and everything works!

    The problem is SELinux, but I don't want to disable SELinux. I tried this with no success:

        setsebool -P httpd_enable_cgi 1
        chcon -t httpd_sys_script_exec_t /home/someuser/cgi-bin/php.fcgi
        chcon -R -t httpd_sys_content_t /home/someuser/cgi-bin

    Or maybe it is better to change SELINUX=enforcing to SELINUX=permissive and disable SELinux for httpd? (I think I'd better find the correct configuration.) Thanks for any suggestion on this matter.

    My environment:

        Red Hat Enterprise Linux Server release 5.8 (Tikanga)
        Server version: Apache/2.2.3
        PHP 5.1.6 (cli) (built: Jun 22 2012 06:20:25)
        Copyright (c) 1997-2006 The PHP Group
        Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies

    Some logs:

        ps -ZC httpd
        LABEL                          PID TTY      TIME     CMD
        system_u:system_r:httpd_t     2822 ?        00:00:00 httpd
        system_u:system_r:httpd_t     2823 ?        00:00:00 httpd
        system_u:system_r:httpd_t     2824 ?        00:00:00 httpd
        system_u:system_r:httpd_t     2825 ?        00:00:00 httpd
        system_u:system_r:httpd_t     2826 ?        00:00:00 httpd
        system_u:system_r:httpd_t     2836 ?        00:00:00 httpd
        system_u:system_r:httpd_t     2837 ?        00:00:00 httpd
        system_u:system_r:httpd_t     2838 ?        00:00:00 httpd
        system_u:system_r:httpd_t     2839 ?        00:00:00 httpd
        system_u:system_r:httpd_t     2840 ?        00:00:00 httpd

        getsebool -a | grep httpd
        allow_httpd_anon_write --> off
        allow_httpd_bugzilla_script_anon_write --> off
        allow_httpd_cvs_script_anon_write --> off
        allow_httpd_mod_auth_pam --> off
        allow_httpd_nagios_script_anon_write --> off
        allow_httpd_prewikka_script_anon_write --> off
        allow_httpd_squid_script_anon_write --> off
        allow_httpd_sys_script_anon_write --> off
        httpd_builtin_scripting --> on
        httpd_can_network_connect --> off
        httpd_can_network_connect_db --> off
        httpd_can_network_relay --> off
        httpd_can_sendmail --> on
        httpd_disable_trans --> off
        httpd_enable_cgi --> on
        httpd_enable_ftp_server --> off
        httpd_enable_homedirs --> on
        httpd_execmem --> off
        httpd_read_user_content --> off
        httpd_rotatelogs_disable_trans --> off
        httpd_setrlimit --> off
        httpd_ssi_exec --> off
        httpd_suexec_disable_trans --> off
        httpd_tty_comm --> on
        httpd_unified --> on
        httpd_use_cifs --> off
        httpd_use_nfs --> off

    Read the article

  • what's wrong with my Ubuntu 11.10 bind9 configuration?

    - by John Bowlinger
    I've followed several tutorials on installing your own nameservers and I'm pretty much at my wit's end, because I cannot get them to resolve. Note, the actual domain and IP address have been changed for privacy to example.com and 192.168.0.1.

    My named.conf.local file:

        zone "example.com" {
            type master;
            file "/var/cache/bind/example.com.db";
        };
        zone "0.168.192.in_addr.arpa" {
            type master;
            file "/var/cache/bind/192.168.0.db";
        };

    My named.conf.options file:

        options {
            forwarders {
                192.168.0.1;
            };
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    My resolv.conf file:

        search example.com.
        nameserver 192.168.0.1

    My forward DNS file:

        $ORIGIN example.com.
        $TTL 86400
        @  IN  SOA  ns1.example.com. root.example.com. (
               2012083101 ; Serial
               604800     ; Refresh
               86400      ; Retry
               2419200    ; Expire
               3600 )     ; Negative Cache TTL
        example.com.     NS     ns1.example.com.
        example.com.     NS     ns2.example.com.
        example.com.     MX     10 mail.example.com.
        @                IN A   192.168.0.1
        ns1.example.com  IN A   192.168.0.1
        ns2.example.com  IN A   192.168.0.2
        mail             IN A   192.168.0.1
        server1          IN A   192.168.0.1
        gateway          IN CNAME ns1.example.com.
        headoffice       IN CNAME server1.example.com.
        smtp             IN CNAME mail.example.com.
        pop              IN CNAME mail.example.com.
        imap             IN CNAME mail.example.com.
        www              IN CNAME server1.example.com.
        sql              IN CNAME server1.example.com.

    And my reverse DNS:

        $ORIGIN 0.168.192.in-addr.arpa.
        $TTL 86400
        @  IN  SOA  ns1.example.com. root.example.com. (
               2009013101 ; Serial
               604800     ; Refresh
               86400      ; Retry
               2419200    ; Expire
               3600 )     ; Negative Cache TTL
        1  PTR  mail.example.com.
        1  PTR  server1.example.com.
        2  PTR  ns1.example.com.

    Yet, when I restart bind9 and do:

        host ns1.example.com localhost

    I get:

        Using domain server:
        Name: localhost
        Address: 127.0.0.1#53
        Aliases:

        Host ns1.example.com.example.com not found: 2(SERVFAIL)

    Similarly, for:

        host 192.168.0.1 localhost

    I get:

        ;; connection timed out; no servers could be reached

    Anybody know what's going on? Btw, my domain name "www.example.com" that I've used in this question is being forwarded to my ISP's nameservers. Would that affect my bind9 configuration? I want to learn how to set up nameservers on my own for learning, so that is why I'm going through all this trouble.

    Read the article

  • Load balancing with Cisco router

    - by you8301083
    I have a Cisco router with two bonded T1's which are set up as a VPN to the main office. We need more bandwidth but can't get other connections (or it's too costly), so I would like to have a DSL connection installed. This DSL connection will run over a VPN to the same main office, but it won't be bonded with the T1's, so it won't act as a single connection. Since the three circuits won't act as a single connection (basically it would be two connections: 2 T1's + 1 DSL), we would have to split the network in half, but I don't want to do that. Instead, would it be possible to send all HTTP/HTTPS over the DSL connection but send all mission critical data (such as voice/Active Directory) over the T1's? I basically want to send specific ports over DSL and everything else over the T1's, without separating half of the users' traffic onto the DSL and the rest onto the T1's.

    Read the article

  • Create and use intermediate certificate authority on Windows Server 2012?

    - by Sid
    Background: Server OS is Windows Server 2012. GUI is installed as we come up to speed with PowerShell. Setup is staging, not production (yet). We have our (internal, domain limited) Root CA installed. I would like to take the Root CA offline to secure storage, but before that I'd like to set up an intermediate CA which can take over the actual live, online (int-RA-net) functionality. Can someone guide me covering:

    - creating the intermediate CA certificate request
    - installing the intermediate CA certificate on the domain controller (the Certification Authority role is already installed, with the Root CA online right now)
    - using the intermediate CA to generate a certificate (any use certificate, just for demonstration purposes)

    Obviously this certification chain would be invalid on computers outside our domain (self-trusted root - our root certificate is NOT from the common 3rd parties). This last point is NOT a problem.

    Read the article

  • Do large corporations block jQuery content on web pages?

    - by Max Vernon
    We are currently redesigning our website. The company we've hired to do the redesign is advocating the use of jQuery to render the pages dynamically. Our SEO specialist is under the impression that many larger corporations may have jQuery blocked in their proxies to prevent their users from visiting sites like Facebook. Is this something you are aware of? Forgive me if this is off topic for SF.SE!

    Read the article

  • Excel IP address and subnet to network and inverse mask [closed]

    - by Steve Dailey
    We need a script, macro or something in Excel where we can take a list like the one below:

        interface Vlan100
         ip address 192.168.1.3 255.255.255.0
        interface Vlan101
         ip address 192.168.2.3 255.255.255.128
        interface Vlan102
         ip address 192.168.2.130 255.255.255.128
        interface Vlan103
         ip address 192.168.3.3 255.255.255.240
        etc...

    and produce a list like the one below:

        ospf 1
         undo silent-interface Vlan-interface100
         undo silent-interface Vlan-interface101
         undo silent-interface Vlan-interface102
         undo silent-interface Vlan-interface103
         area 0.0.0.0
          network 192.168.1.0 0.0.0.255
          network 192.168.2.0 0.0.0.127
          network 192.168.2.128 0.0.0.127
          network 192.168.3.0 0.0.0.15

    So it will need to take an IP address/subnet mask and convert them to network number/inverse mask. I believe I can handle the VLAN manipulation with a substitution, so no need to spend time on that.
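    For what it's worth, the address/mask arithmetic itself is a bitwise AND (for the network number) plus a bitwise complement (for the inverse/wildcard mask). Here is a minimal C# sketch of that conversion, offered as a standalone illustration rather than the requested Excel macro; the class and method names are invented:

        // Computes the network number (ip AND mask) and the inverse mask
        // (bitwise complement of the mask) from dotted-decimal strings.
        using System;
        using System.Linq;

        class MaskConverter
        {
            static void Main()
            {
                Console.WriteLine(Convert("192.168.2.130", "255.255.255.128"));
                // prints: network 192.168.2.128 0.0.0.127
            }

            static string Convert(string ip, string mask)
            {
                byte[] ipBytes = ip.Split('.').Select(byte.Parse).ToArray();
                byte[] maskBytes = mask.Split('.').Select(byte.Parse).ToArray();

                var network = new byte[4];
                var wildcard = new byte[4];

                for (int i = 0; i < 4; i++)
                {
                    network[i] = (byte)(ipBytes[i] & maskBytes[i]); // ip AND mask
                    wildcard[i] = (byte)(~maskBytes[i]);            // inverse mask
                }

                return string.Format("network {0} {1}",
                    string.Join(".", network),
                    string.Join(".", wildcard));
            }
        }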

    Read the article

  • Command line import of database using latin1 encoding

    - by chrisjlee
    I'm using a particular cloud hosting solution (one which I won't name) and they don't provide SSH access, so I'm at the whim of however the database is dumped. I downloaded the dump, which is packed into a tar.gz file, and discovered that it uses latin1 encoding. I didn't get to specify the encoding for the host I'm using, because I don't have SSH or DB access. I tried to import it via the command line into my local development environment (mysql -uroot foodb < file.db) like I usually do with other databases, but am having problems. Is it possible to import a database via the command line while specifying which encoding (preferably latin1) to use for the import? Or do I have to convert it to UTF8?

    Read the article

  • .htaccess deny from all does not work?

    - by jeffery_the_wind
    I am running Apache 2.2.20 on an Ubuntu 11.04 web server. I have a Joomla site running on it, but I have also added some custom content. In the main web directory I have added a folder /images/sub_folder, and in this sub_folder I have put a bunch of pictures. I do not want anyone to be able to simply access these pictures directly from the web, so I made a .htaccess file in that sub_folder and just put the following line in it:

        deny from all

    There doesn't seem to be any effect; I can still access the images directly from a web browser. I have restarted the Apache service. What am I doing wrong? Thanks, Tim

    Read the article
