Search Results

Search found 4879 results on 196 pages for 'geeks'.


  • Part 1 Basic Webtrends REST Examples

    - by GeekAgilistMercenary
    In this entry I just want to cover some examples of how to connect to the Webtrends DX Web Services. The DX Web Services use REST as the architecture, providing simple URI-based endpoints to connect to. With the Webtrends SDK you can connect to these services with your account information. Here are the basic steps to retrieve a profile list, the reports from one of those profiles, and then the report you want from that report list.

    The first step is to create a Webtrends user:

        WebTrends.Sdk.Account.User webtrendsUser = new Account.User();
        webtrendsUser.UserName = username;
        webtrendsUser.Password = password;
        webtrendsUser.AccountName = account;

    After you create the Webtrends user, simply request a profile list by getting a list of ProfileDefinition objects:

        List<WebTrends.Sdk.Profile.ProfileDefinition> profiles =
            WebTrends.Sdk.Factory.NavigationFactory.BuildListing(webtrendsUser);

    Next you will want to grab a report listing based on the profile you are in and your credentials:

        List<WebTrends.Sdk.Report.ReportDefinition> reports =
            WebTrends.Sdk.Factory.NavigationFactory.BuildListing(profiles[i], webtrendsUser);

    In the code above, i equates to the specific profile you want from the retrieved list of profiles. The common scenario is that the profiles have been pulled into a drop-down, combo, or list box from which the user can select. When the user selects a specific profile, that profile object can then be used to pull the list of ReportDefinitions.

    Once we have the report definitions, all sorts of criteria can be added together to query for a specific report. This is also where things can get a little tricky. For instance, take a look at the code below:

        WebTrends.Sdk.Factory.ReportFactory.CreateDimensionalReport(
            report.ID.ToString(),
            profiles[i].ID.ToString(),
            "2010m01",
            webtrendsUser);

    CreateDimensionalReport takes four parameters for this particular overload: the report ID, the profile ID, the Webtrends date format, and the Webtrends user object. There are a number of other overloads available within this factory's method that allow for passing the specific REST URI and other criteria to retrieve the report of your choice. In the near future we will be adding some more to this method, which will provide more flexibility without needing to use the full REST URI.

    I will have more on this, so all you coders out there using Webtrends DX Services, I hope this is helpful! Enjoy.
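    Put together, the whole flow reads like this - a minimal sketch assembled only from the SDK calls shown above, with the first profile and report picked for brevity:

        // Minimal end-to-end sketch using the Webtrends SDK calls from this post.
        // Assumes username, password and account hold valid credentials.
        var webtrendsUser = new WebTrends.Sdk.Account.User();
        webtrendsUser.UserName = username;
        webtrendsUser.Password = password;
        webtrendsUser.AccountName = account;

        var profiles = WebTrends.Sdk.Factory.NavigationFactory.BuildListing(webtrendsUser);
        var reports = WebTrends.Sdk.Factory.NavigationFactory.BuildListing(profiles[0], webtrendsUser);

        // "2010m01" is the Webtrends date format for January 2010.
        var report = WebTrends.Sdk.Factory.ReportFactory.CreateDimensionalReport(
            reports[0].ID.ToString(), profiles[0].ID.ToString(), "2010m01", webtrendsUser);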

    Read the article

  • KeyRef – A Keyboard Shortcut Reference Site

    - by Liam McLennan
    The mouse is like computer training wheels. It makes using a computer easier – but it slows you down. Like many of my peers I am making an effort to learn keyboard shortcuts to reduce my dependence on the mouse. So I have started accumulating browser bookmarks to websites listing keyboard shortcuts for Vim, ReSharper, etc. Based on the assumption that I am not the only person who finds this untenable, I am considering building the ultimate keyboard shortcut reference site. This is an opportunity for me to improve my Rails skills and hopefully contribute something useful to the anti-mouse community.

    Mockups

    Shortcuts will be grouped by application, so the first thing a user needs to do is find their application. They do this by typing the application name into a textbox and then selecting from a reducing list of applications. This interface will work like the Stack Overflow tags page. Selecting an application will take the user to a page that lists the shortcuts for that application. This page will have a permalink for bookmarking. Shortcuts can be searched by keyword or by using the shortcut itself.

    Read the article

  • Visual Studio 2010 Keyboard Shortcut Posters Available

    - by Jim Duffy
    I’m a firm believer in the productivity gains you experience when using keyboard shortcuts in Visual Studio. If you’re not using keyboard shortcuts while coding then your productivity is suffering. Some of my favorites (omitting the obvious ones like F5 to start debugging) are:

    Ctrl+K, C – Comment a section of code
    Ctrl+K, U – Uncomment a section of code
    Ctrl+K, D – Format the current document (indentation, etc.)
    Shift+Alt+C – Add a new class to a project
    Shift+Alt+A – Add an existing item to a project
    Ctrl+Shift+A – Add a new item to a project

    The good news is all of these and a TON of others are documented in the Visual Studio Keyboard Shortcut Posters (available as PDFs). The only problem is there are so many that you need a printer capable of printing on larger paper: while you can read them all on 8 1/2 x 11 paper in landscape mode, for them to be a valuable quick reference on your cubicle wall you’re going to need to print them on large paper. If you don’t have a printer capable of producing large-sized printouts, head down to Office Depot, Staples, FedEx Office, or your favorite print shop and have them print one for you. Oh, and one last thing: I’d really like Microsoft to take those people’s pictures off them. Really? Do we need to look at these people when trying to improve our productivity? Have a day. :-|

    Read the article

  • WPF Databinding- Part 2 of 3

    - by Shervin Shakibi
    This is a follow-up to my previous post, WPF Databinding - Not Your Father's Databinding, Part 1 of 3. You can download the source code here: http://ssccinc.com/wpfdatabinding.zip

    Example 04

    In this example we demonstrate the use of default properties and also binding to an instance of an object which is part of a collection bound to its container. This is actually not as complicated as it sounds. First of all, let's take a look at our Employee class. Notice we have overridden the ToString method, which will return the employee's first name, last name and employee number in parentheses:

        public override string ToString()
        {
            return String.Format("{0} {1} ({2})", FirstName, LastName, EmployeeNumber);
        }

    In our XAML we have set the ItemsSource of the list box to just "Binding", and the Grid that contains it has its DataContext set to a collection of our Employee objects:

        DataContext="{StaticResource myEmployeeList}"
        ...
        <ListBox Name="employeeListBox" ItemsSource="{Binding}" Grid.Row="0" />

    The ToString method for each instance will get executed, and its result is what the list box displays; if we did not have a ToString override, the list box would not show anything so readable. Now let's take a look at the grid that will display the details when someone clicks on an item. The Grid has the following DataContext:

        DataContext="{Binding ElementName=employeeListBox,
            Path=SelectedItem}"

    which means it is bound to a specific instance of the Employee object. Within the grid we have textboxes that are bound to different properties of our class:

        <TextBox Grid.Row="0" Grid.Column="1" Text="{Binding Path=FirstName}" />
        <TextBox Grid.Row="1" Grid.Column="1" Text="{Binding Path=LastName}" />
        <TextBox Grid.Row="2" Grid.Column="1" Text="{Binding Path=Title}" />
        <TextBox Grid.Row="3" Grid.Column="1" Text="{Binding Path=Department}" />

    Example 05

    This project demonstrates use of ObservableCollection and the INotifyPropertyChanged interface. Let's take a look at Employee.cs first; notice it implements the INotifyPropertyChanged interface. Now scroll down and notice that for each setter there is a call to the OnPropertyChanged method, which will fire the event notifying that the value of that specific property has been changed. Next, EmployeeList.cs: notice it is an ObservableCollection. Go ahead and set the startup project to Example 05 and run it. Click on "Add a new employee" and the new employee should appear in the list box.

    Example 06

    This is a great example of IValueConverter. It's actually a two-for-one deal; like most of my presentation demos I found this by "Binging" (formerly known as g---ing), but unfortunately now I can't find the original author to give him or her the credit he/she deserves. Before we look at the code, let's run the app and look at the finished product. Put 0 in Celsius and you should see the Fahrenheit textbox displaying 32 degrees (I know this is calculating correctly from my elementary school science class); also note the color changed to blue. Now put 100 in Celsius, which should give us 212 Fahrenheit, but now the color is red, indicating it is hot. Finally, put 75 in Fahrenheit and you should see 23.88 for Celsius, and the color now should be black. Basically, IValueConverter allows different types to be bound; I'm sure you have had problems in the past trying to bind to date values. Look at FahrenheitToCelciusConverter.cs first and notice it implements IValueConverter. IValueConverter has two methods: Convert and ConvertBack. In each method we have the code for converting Fahrenheit to Celsius and vice versa. In our XAML, we set a reference in our Window.Resources section, and for txtCelsius we set the path to txtFahrenheit and the converter to an instance of our FahrenheitToCelciusConverter. There is no need to repeat this for txtFahrenheit since we have both a Convert and a ConvertBack:

        Text="{Binding UpdateSourceTrigger=PropertyChanged,
            Path=Text, ElementName=txtFahrenheit,
            Converter={StaticResource myTemperatureConverter}}"

    As mentioned earlier, this is a twofer demo. In the second demo we are converting a double datatype to a brush. Take a look at TemperatureToColorConverter: in its Convert method, if the value is less than our cold temperature threshold we return a blue brush, and if it is higher than our hot temperature threshold we return a red brush. Since we don't have to convert a brush back to a double value in our example, ConvertBack is not implemented. Take time to go through these three examples, and I hope you get a better understanding of databinding, ObservableCollection and IValueConverter. In the next blog posting we will talk about ValidationRule, DataTemplates and DataTemplate triggers.
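    The post describes the converter but doesn't reproduce its body. A minimal sketch of what the two methods of FahrenheitToCelciusConverter might look like (my reconstruction, not the original code; the class name follows the post):

        using System;
        using System.Globalization;
        using System.Windows.Data;

        public class FahrenheitToCelciusConverter : IValueConverter
        {
            // Convert: source (txtFahrenheit.Text) -> target (txtCelsius.Text)
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                double fahrenheit;
                if (double.TryParse(value as string, NumberStyles.Any, culture, out fahrenheit))
                    return ((fahrenheit - 32) * 5.0 / 9.0).ToString("0.##", culture);
                return value; // leave non-numeric input untouched
            }

            // ConvertBack: target (Celsius) -> source (Fahrenheit)
            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                double celsius;
                if (double.TryParse(value as string, NumberStyles.Any, culture, out celsius))
                    return (celsius * 9.0 / 5.0 + 32).ToString("0.##", culture);
                return value;
            }
        }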

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done.

    'Pure' SOA involves the modeling of an organisation's processes, the so-called 'top down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, have clearly little relation to process driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA.

    These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'succeeding with SOA'. Organisations who focus too much on bottom up development, or who waste too much time and money on top down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'succeeding with SOA' then the reasons for the project's existence probably need to be questioned.

    There are a number of things we can be sure of:

    - An organisation will have a number of disparate IT systems
    - Some of these systems will have redundant data and functionality
    - Integration will give considerable ROI
    - Integration will already be under way
    - Services will already exist in the organisation
    - These services will be inconsistent in their implementation and in their governance

    So there are three goals here:

    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services

    Goals 2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring goal 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. Goals 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with the far-reaching consequences that this will have on all our LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never - that's the point of integration); we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large.

    Accepting that an object can have more than one model over time, with perhaps more than one model being current at any given time, will help us realise the limitations of the top down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer? If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B's IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach?

    With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete, and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not so near future. To 'succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes.

    Could the problem be that there are two different problem domains? And that the whole concept of SOA as it is described by clever salespeople today creates an example of the oft dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists?

    Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals. If company A starts building services from incomplete models, without a game plan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies, who a short while ago had one problem each, that now have two problems each. This has happened because of a focus on 'succeeding with SOA', rather than solving the problem at hand.

    This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box.

    Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution's level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top down or bottom up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling.

    How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation (see the sketch at the end of this post). The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying handling of security, logging, exception handling, etc.

    This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independent of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI.

    Of course there are disadvantages to this, the biggest one being the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the services according to the model. However, if that approach worked we wouldn't be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, so that they are forever bound to the model that only applies at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.
    The choice is between something that will initially be of low quality but will work, or something that may well be impossible to achieve in most situations. In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither want, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?
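    To make the 'virtual service plus adapter' idea above concrete, here is a minimal C# sketch. This is not MSE code (MSE does the transformation and routing through configuration, not hand-written classes), and every type name below is hypothetical:

        // The analyst's view: a uniform, modeled contract and business object.
        public class CustomerInfo
        {
            public string Number { get; set; }
            public string Name { get; set; }
        }

        public interface ICustomerService
        {
            CustomerInfo GetCustomer(string customerNumber);
        }

        // Stand-in for a proprietary/legacy service proxy (hypothetical).
        public class LegacyCrmClient
        {
            public class CrmRecord { public string DisplayName { get; set; } }

            public CrmRecord FetchRecord(string table, string key)
            {
                return new CrmRecord { DisplayName = "Customer " + key };
            }
        }

        // The 'adapter' service: transforms and routes between the two views.
        public class LegacyCrmCustomerAdapter : ICustomerService
        {
            private readonly LegacyCrmClient _crm = new LegacyCrmClient();

            public CustomerInfo GetCustomer(string customerNumber)
            {
                // Transform the modeled request into the legacy call...
                var record = _crm.FetchRecord("CUST", customerNumber);

                // ...and the legacy result back into the modeled business object.
                return new CustomerInfo { Number = customerNumber, Name = record.DisplayName };
            }
        }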

    Read the article

  • Geekswithblogs.net | Congrats to the new and renewed MVPs

    - by Geekswithblogs Administrator
    We just wanted to send a shout out to all those who have entered the MVP program or been renewed in it. I always wondered why they wouldn’t move the April date off of April Fool’s Day, because that would be an interesting email to get on April 1. If you are a GWB blogger and an MVP but your name does not have an MVP logo next to it on the homepage, let us know via support and we will get you added. Related tags: Geekswithblogs.net, MVP, Microsoft

    Read the article

  • Telesharp: An application metadata repository that enables true agility in enterprise .NET applications.

    - by Vishal
    Tellago Studios proudly announces its newest product, its third within a year: TeleSharp. .NET configuration management has always been a nightmare for any enterprise. TeleSharp is an innovative product that addresses the most common challenges of .NET applications in the enterprise. After years of struggling to develop and manage large .NET applications, we decided to create a tool that makes .NET applications truly agile. You can read more about TeleSharp and the difference it can make in your enterprise. Also, if you want to see TeleSharp in action, check out the videos about it. Click here to get more information about the TeleSharp trial version! Click here to register for the TeleSharp webinar on July 6th from 2 PM - 3 PM EST.   -Vishal

    Read the article

  • Override an IOCTL Handler in PQOAL

    - by Kate Moss' Big Fan
    When porting or creating a BSP for a new platform, we often need to make changes to OEMIoControl or, more specifically, to a HAL IOCTL handler. Since Microsoft introduced PQOAL in CE 5.0, and more and more BSPs today leverage PQOAL to simplify the OAL, we no longer define OEMIoControl directly. It is somehow analogous to migrating from the pure Windows SDK to MFC; people start to define those MFC handlers and forget the WinMain and the big message loop. If you ever take a look at the interface between the OAL and the kernel, PUBLIC\COMMON\OAK\INC\oemglobal.h, pfnOEMIoctl is still there, just as WinMain has been the entry point of a Windows program since day one. (For those who may argue that pfnOEMIoctl is not OEMIoControl, I encourage you to dig into PRIVATE\WINCEOS\COREOS\NK\OEMMAIN\oemglobal.c, which initializes pfnOEMIoctl to OEMIoControl. The interface just splits the OAL and the kernel, which are no longer linked into one executable file in CE 6; all of the function signatures are still identical.)

    So let's trace into PQOAL to see how it implements OEMIoControl and how we can override an IOCTL handler we are interested in. The first thing to know is the entry point (just like finding the WinMain in MFC). OEMIoControl is defined in PLATFORM\COMMON\SRC\COMMON\IOCTL\ioctl.c. Basically, it does nothing special: it scans a pre-defined IOCTL table, g_oalIoCtlTable, and then executes the handler (see the "Execute the handler" comment below). Other than that, it is just error handling and the use of a critical section to serialize the function.

        BOOL OEMIoControl(
            DWORD code, VOID *pInBuffer, DWORD inSize, VOID *pOutBuffer, DWORD outSize,
            DWORD *pOutSize
        ) {
            BOOL rc = FALSE;
            UINT32 i;
            ...
            // Search the IOCTL table for the requested code.
            for (i = 0; g_oalIoCtlTable[i].pfnHandler != NULL; i++) {
                if (g_oalIoCtlTable[i].code == code) break;
            }

            // Indicate unsupported code
            if (g_oalIoCtlTable[i].pfnHandler == NULL) {
                NKSetLastError(ERROR_NOT_SUPPORTED);
                OALMSG(OAL_IOCTL, (
                    L"OEMIoControl: Unsupported Code 0x%x - device 0x%04x func %d\r\n",
                    code, code >> 16, (code >> 2)&0x0FFF
                ));
                goto cleanUp;
            }

            // Take critical section if required (after postinit & no flag)
            if (
                g_ioctlState.postInit &&
                (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0
            ) {
                // Take critical section
                EnterCriticalSection(&g_ioctlState.cs);
            }

            // Execute the handler
            rc = g_oalIoCtlTable[i].pfnHandler(
                code, pInBuffer, inSize, pOutBuffer, outSize, pOutSize
            );

            // Release critical section if it was taken above
            if (
                g_ioctlState.postInit &&
                (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0
            ) {
                // Release critical section
                LeaveCriticalSection(&g_ioctlState.cs);
            }

        cleanUp:
            OALMSG(OAL_IOCTL&&OAL_FUNC, (L"-OEMIoControl(rc = %d)\r\n", rc ));
            return rc;
        }

    Where is g_oalIoCtlTable? It is defined in your BSP. Let's use the DeviceEmulator BSP as an example. PLATFORM\DEVICEEMULATOR\SRC\OAL\OALLIB\ioctl.c defines the table as:

        const OAL_IOCTL_HANDLER g_oalIoCtlTable[] = {
        #include "ioctl_tab.h"
        };

    And that leads to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, which defines some of the IOCTL handlers; others are defined in oal_ioctl_tab.h under PLATFORM\COMMON\SRC\INC\. Finally, we have the full table body! (Just like tracing MFC, always jumping back and forth.)

    The format of the table is very straightforward: IOCTL code, flags, and handler function.

        // IOCTL CODE,                          Flags   Handler Function
        //------------------------------------------------------------------------------
        { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlHalInitRegistry     },
        { IOCTL_HAL_INIT_RTC,                       0,  OALIoCtlHalInitRTC          },
        { IOCTL_HAL_REBOOT,                         0,  OALIoCtlHalReboot           },

    PQOAL scans through the table until it finds a matching IOCTL code, then invokes the handler function. Since it scans the table from the top, if we define TWO handlers with the same IOCTL code, the first one is always invoked, without exception. Now back to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, with the following table:

        { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlDeviceEmulatorHalInitRegistry     },
        ...
        #include <oal_ioctl_tab.h>

    Note the IOCTL_HAL_INITREGISTRY handler is defined in both the BSP's local ioctl_tab.h and the common oal_ioctl_tab.h, but because the BSP's local handler comes before "#include <oal_ioctl_tab.h>", we know OALIoCtlDeviceEmulatorHalInitRegistry always gets called. In this example, the DeviceEmulator BSP overrides the IOCTL_HAL_INITREGISTRY handler from OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry by manipulating the g_oalIoCtlTable table. (From some points of view, it is similar to the message map in MFC.)

    Please be aware, when you override an IOCTL handler in PQOAL, you may want to clone the original implementation into your BSP and change it to meet your needs. This is recommended and saves you redundant work, but remember to rename the handler function (just as the DeviceEmulator changes the name of OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry). If you don't change the name, the linker may not be happy (due to the name conflict), and more importantly, by using a different handler name you can always redirect the handler back to the original one. (It is like the OOP concept of calling a function in the base class; still not so clear? I am going to show you soon!)

    OALIoCtlDeviceEmulatorHalInitRegistry sets up DeviceEmulator-specific registry settings and, in the end, if everything goes well, calls OALIoCtlHalInitRegistry (PLATFORM\COMMON\SRC\COMMON\IOCTL\reginit.c) to do the rest:

        if(fOk) {
            fOk = OALIoCtlHalInitRegistry(code, pInpBuffer, inpSize, pOutBuffer,
                outSize, pOutSize);
        }

    Now you've got the picture. Whenever you want to override an IOCTL handler that is implemented in PQOAL, just:

    1. Clone the handler function into your BSP as a template.
    2. Rename the handler function, and change the IOCTL table header file that maps the IOCTL code to the function.
    3. Implement your IOCTL handler; whenever you need to redirect back, just call the original handler function.

    It is the standard way of implementing a custom IOCTL and the one most Microsoft developers prefer. The mapping of IOCTL routine to IOCTL code is platform specific - you control the header file that does that mapping.

    Read the article

  • An unexpected pleasure from Windows 8

    - by eddraper
    This post is certainly on the more nuanced side of all the goodness that is Windows 8, but it’s about something that has really changed my PC usage experience for the better. Besides being a geek and enjoying all the techno-thrills and chills that go along with sitting in front of a keyboard all day, I really love the forest.  Trees have always been special to me.  The feeling of being in the forest with all the sounds and ambiance, the broken light, the fragrance of the air… it’s paradise to me. As I can’t get there often, due to work, and quite often the heat here in Texas, I’ve found something that can at least partially fill the gap…  When you install Windows 8, you’ll have an app called “Naturespace” from http://www.naturespace.com/ .  It boasts a number of predefined loops in what they call “holographic audio.”  They’re essentially high-tech 3D sound fields recorded in natural environments. After checking them out, I really liked the sound of the “Daybreak” selection. A great benefit is that you don’t have to be in Metro/Modern/Windows App Store mode in order to keep the sound playing.  To start the day, I click on Daybreak, start it, then go back to the desktop and fire up VS, Chrome, etc. As I work and play, I’m surrounded by this delightful background ambiance which relaxes me and puts my mind at ease. Give it a try.  I think you’ll like it.  And no, you don’t need ear buds or headphones to get the benefit.

    Read the article

  • Taking a screenshot from within a Silverlight #WP7 application

    - by Laurent Bugnion
    Often times, you want to take a screenshot of an application’s page. There can be multiple reasons. For instance, you can use this to provide an easy feedback method to beta testers. I find this super invaluable when working on integration of design in an app: the user can take quick screenshots, attach them to an email and send them to me directly from the Windows Phone device. However, the same mechanism can also be used to provide screenshots as a feature of the app, for example if the user wants to save the current status of his application, etc.

    Caveats

    Note the following:

    - The code requires an XNA library to save the picture to the media library. To have this, follow these steps: in your application (or class library), add a reference to Microsoft.Xna.Framework; in your code, add a “using” statement for Microsoft.Xna.Framework.Media; in the Properties folder, open WMAppManifest.xml and add the following capability: ID_CAP_MEDIALIB.
    - The method call will fail with an exception if the device is connected to the Zune application on the PC. To avoid this, either disconnect the device when testing, or end the Zune application on the PC.
    - While the method call will not fail on the emulator, there is no way to access the media library, so it is pretty much useless on this platform.
    - This method only prints Silverlight elements to the output image. Other elements (such as a WebBrowser control’s content, for instance) will output a black rectangle.

    The code

        public static void SaveToMediaLibrary(
            FrameworkElement element, string title)
        {
            try
            {
                var bmp = new WriteableBitmap(element, null);

                var ms = new MemoryStream();
                bmp.SaveJpeg(
                    ms,
                    (int)element.ActualWidth,
                    (int)element.ActualHeight,
                    0,
                    100);
                ms.Seek(0, SeekOrigin.Begin);

                var lib = new MediaLibrary();
                var filePath = string.Format(title + ".jpg");
                lib.SavePicture(filePath, ms);

                MessageBox.Show(
                    "Saved in your media library!",
                    "Done",
                    MessageBoxButton.OK);
            }
            catch
            {
                MessageBox.Show(
                    "There was an error. Please disconnect your phone from the computer before saving.",
                    "Cannot save",
                    MessageBoxButton.OK);
            }
        }

    This method can save any FrameworkElement. Typically I use it to save a whole page, but you can pass any other element to it (a usage sketch follows at the end of this post). First, we create a new WriteableBitmap. This excellent class can render a visual tree into a bitmap. Note that for even more features, you can use the great WriteableBitmapEx class library (which is open source). Next, we save the WriteableBitmap to a MemoryStream. The only format supported by default is JPEG; however, it is possible to convert to other formats with the ImageTools library (also open source). Finally, we save the picture to the Windows Phone device’s media library.

    Using the image

    To retrieve the image, simply launch the Pictures library on the phone. The image will be in Saved Pictures. From here, you can share the image (by email, for instance), or synchronize it with the PC using the Zune software.

    Saving to other platforms

    It is of course possible to save to other platforms than the media library. For example, you can send the image to a web service, or save it to the isolated storage on the device. To do this, instead of using a MemoryStream, you can use any other stream (such as a web request stream, or a file stream) and save to that instead.

    Hopefully this code will be helpful to you! Happy coding, Laurent

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
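    As a footnote, calling the method is a one-liner. Here is a hypothetical usage from a page’s code-behind (the helper class name and button handler are made up for illustration):

        // Capture the whole current page when the user taps a feedback button.
        private void FeedbackButton_Click(object sender, RoutedEventArgs e)
        {
            ScreenshotHelper.SaveToMediaLibrary(this, "FeedbackScreenshot");
        }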

    Read the article

  • Search engine optimization Links

    - by Michael Freidgeim
    Below are a few links that I used for my search engine optimization research:

    - http://websearch.about.com/od/designforsearch/ss/tendesigntips.htm
    - Keyword Selection Guidelines
    - Where To Use Keywords
    - Google Search Engine Optimization
    - http://websearch.about.com/od/keywordsandphrases/a/sitedesign.htm
    - http://en.wikipedia.org/wiki/Search_engine_optimization
    - http://www.google.com/support/webmasters/bin/answer.py?answer=35291

    Read the article

  • Building a personal website using Silverlight.

    - by mbcrump
    I’ve always believed that as a developer you should always have a hobby project going on. I think a hobby project needs to contain at least one of the following things:

    - Something that you have never done before.
    - Something that you are interested in.
    - Something that you can work on in your spare time without affecting your *paying* job.

    I decided my hobby project would be an entire web application written in Silverlight that could be used as a self-promotion/marketing tool. The goal of the site is to provide information on the work that I’ve done to conferences, future employers and anyone else who wants to learn more about me. Before I go any further, if you just want to check out the site, it is located at http://michaelcrump.info.

    So, what did I use to create it?

    - MVVM Light – I’m a big fan of this software. The item and project templates plus code snippets make this a huge win for any SL/WPF/WP7 application.
    - Jetpack Theme by Microsoft – I suck at designing, so I used this template to help speed up this project.
    - ComponentOne 3rd party controls – I have a license and really like several of their products.
    - A user control that Jeremy Likness created called DynamicXaml (used with his permission). I had created my own version of this a while back, but Jeremy’s implementation was simply better.

    Main Page – Designed to create my “brand”. This was built for a quick glimpse of who I am and what I do.

    Blog – The best marketing tool for a developer is their blog. I decided to go with an HTML page displaying my site, and the user can pop into full-screen if desired. I also included my feed and Silverlight-Zone. (Another site I work on.)

    Online – This page links to sites that I have been featured on, as well as community involvement and awards. I also have a web service so that I can update this information without re-compiling the Silverlight app.

    Projects – I’ve been wanting to use a CoverFlow for a really long time now. =) This page lists several hobby projects as well as a few professional projects.

    Resume Page – This page only exists because I got tired of sending companies my resume in e-mail. I can now provide a deep link to this page and the recruiter can print, search or save my resume. The PDF of my resume exists in a folder that I can easily update without recompiling the app.

    Contact Page – Just a contact page with a web service that sends the email. The Send button becomes disabled after a successful send (a rough sketch of this follows at the end of this post). I thought of adding captcha to this page but in the end didn’t think it was worth it.

    Looking back at this app, I’m happy with how it turned out. I love Silverlight and I am already thinking of my next hobby project. (Thinking another Windows Phone 7 app or MVC3.)

    Subscribe to my feed
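    As a footnote, here is a rough sketch of the contact page behavior described above. This is hypothetical code, not from the actual site: the generated WCF proxy name ContactServiceClient, its SendEmail operation, and the textbox names are all made up for illustration.

        // Hypothetical code-behind for the contact page: a Silverlight page calls
        // a generated service proxy, and the Send button is disabled on success.
        private void SendButton_Click(object sender, RoutedEventArgs e)
        {
            var client = new ContactServiceClient(); // hypothetical generated WCF proxy
            client.SendEmailCompleted += (s, args) =>
            {
                if (args.Error == null)
                {
                    // Disable the button only after a successful send,
                    // matching the behavior described in the post.
                    SendButton.IsEnabled = false;
                }
            };
            client.SendEmailAsync(NameBox.Text, EmailBox.Text, MessageTextBox.Text);
        }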

    Read the article

  • jQuery "Auto Post-back" Select/Drop-Down List

    - by Doug Lampe
    I have one common piece of jQuery code which I use to submit a form any time the selection changes on a drop-down list (HTML select tag).  This is similar to setting AutoPostBack = true in ASP.Net.  I use a single CSS class (autoSubmit) to annotate that I want the drop-down to force the form to submit on change, so the HTML looks something like this:

        <select id="myAutoSubmitDropDown" name="myAutoSubmitDropDown" class="autoSubmit">
            <option value="1">Option 1</option>
            <option value="2">Option 2</option>
        </select>

    Then the following jQuery will look for any element with this CSS class and submit the parent form when the value is changed:

        function wireUpAutoSubmit() {
          $(".autoSubmit").each(function (index) {
            $(this).change(function () {
              $(this).closest('form').submit();
            })
          });
        }

    I put this in a separate function since I might need to wire this up explicitly after an ajax call.  Therefore I use the following code to set this method to fire when the DOM is loaded:

        $(document).ready(function () {
          wireUpAutoSubmit();
        });

    Read the article

  • Save the dates – Tech.Days 2011, 23rd to 25th of May in London

    - by Eric Nelson
    In May, Microsoft UK (and specifically my group) will be delivering Tech.Days – a week of day-long technical events plus evening activities. We will be covering Windows Phone 7, Silverlight, IE 9, the Windows Azure Platform and more. I’m working right now on the details of what we will be covering around the Windows Azure Platform – and it is shaping up very nicely. There is a little more detail over on TechNet – but for the moment, keep the dates clear if you can. P.S. I think the above is called a “teaser” in marketing speak.

    Read the article

  • Multiline Replacement With Visual Studio

    - by Alois Kraus
    I had to remove some file headers in a bigger project, which were all of the form:

        #region File Header
        /*[ Compilation unit ----------------------------------------------------------
              Name            : Class1.cs
              Language        : C#
              Creation Date   :
              Description     :
        -----------------------------------------------------------------------------*/
        /*] END */
        #endregion

    I know it would be a cool thing to write a simple C# program: use a recursive file search, read all lines, skip the first n lines, and write the files back to disc. But I wanted to test things first before I ruin my source files with one little typo. Here the Visual Studio Search and Replace in Files dialog comes into the game: I can test my regular expression with the Find button before actually breaking anything. And if something goes wrong, I have the Undo button.

    There is a nice blog post from Paulo Morgado online who deals with multiline regular expressions. The Visual Studio regular expressions are non-standard, so you have to adapt your usual regex know-how to the other patterns. The pattern I finally came up with is:

        \#region File Header:b*(.*\n)@\#endregion

    The regular expression can be read as:

        \#region File Header - match "#region File Header" (\# escapes the # character since it is a quantifier)
        :b*                  - after this, none or more spaces or tabs can follow (:b stands for space or tab)
        (.*\n)@              - match anything across lines in a non-greedy way (the @ character makes it non-greedy), to prevent matching too much before the #endregion somewhere in our source file
        \#endregion          - match everything until "#endregion" is found

    I always knew that Visual Studio could do it, but I never bothered to learn the non-standard regex syntax. This is powerful, and it has been inside Visual Studio since 2005!
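    As an aside, the one-off C# tool the author decided against might look roughly like this. This is my sketch, not code from the post; it uses the standard .NET regex syntax rather than the Visual Studio one (RegexOptions.Singleline lets the non-greedy .*? match across lines):

        using System;
        using System.IO;
        using System.Text.RegularExpressions;

        class HeaderStripper
        {
            static void Main(string[] args)
            {
                string root = args[0]; // root folder of the project to clean

                // Standard .NET equivalent of the Visual Studio pattern above:
                // match the whole header region, non-greedily, across lines.
                var header = new Regex(
                    @"#region File Header.*?#endregion\r?\n?",
                    RegexOptions.Singleline);

                foreach (var file in Directory.GetFiles(root, "*.cs", SearchOption.AllDirectories))
                {
                    string text = File.ReadAllText(file);
                    string cleaned = header.Replace(text, string.Empty, 1); // first header only

                    if (cleaned != text)
                    {
                        File.WriteAllText(file, cleaned);
                        Console.WriteLine("Stripped header from " + file);
                    }
                }
            }
        }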

    Read the article

  • SCSF for Visual Studio 2010

    - by Anthony Trudeau
    The Smart Client Software Factory (SCSF) for Visual Studio 2010 was uploaded tonight.  You can get it, the source code, and the documentation on the patterns & practices page. Note: do not forget to "unblock" the documentation (CHM) file after you download it.  To unblock it, right-click the file, choose Properties, and click the Unblock button.

    Read the article

  • Specs, Form and Function – What am I Missing?

    - by Barry Shulam
    Friday, October 26th, the Microsoft Surface RT arrived at the office.  I was summoned to my boss’s office for the grand unpacking.  If I had planned ahead I could have used my iPhone 4 to film the event and post it on YouTube; however, the desire to hold the device and turn it ON was more inviting than becoming a proxy reviewer for Engadget’s website.

    1980 was the first time we had a personal computer in our house.  It was a Kaypro computer. It weighed 29 pounds, more than any person's lap could hold.  Back then the term “portable computer” meant you could remove it from the building and take it elsewhere.  Today I am typing this entry on a MacBook Air which weighs 2.38 pounds. This morning Amazon’s front page main title is: “Much More for Much Less.” I was born at the right time, starting with the CP/M operating system on the Kaypro, then the DOS, Windows, Linux, Mac OS X and mobile phone operating systems and languages.  If you are not aware, technology is moving at a rapid pace.  The new iPad (for those who are keeping score - iPad 4) is replacing a 7-month-old machine, the new iPad (iPad 3). I have used and owned many technology devices in my life.  The main point that most readers in the USA overlook is the fact that we are in the USA.  The devices we purchase have a great digital garden to support them.  The Kaypro computer had a 7-inch screen.  It was a TV tube with two colors - black and green.  You could see the 80-column screen flicker with characters - have you ever played Pac-Man emulated on the screen with the ABC characters? Traveling across the world you will find that not all apps on your device will function as they did back home, because they are not offered outside of your country of origin.

    I think the main question a buyer of technology should be asking is function.  The greatest specs without function limit you.  The most beautiful form without function is the same as a crystal vase on your shelf - not a good cereal bowl in the morning. The Microsoft Surface RT, Amazon Kindle Fire and Apple iPad are all great devices in their respective customers' hands. My advice for those looking to purchase this year: if the device is your only technology device, you buy what you WANT and LIKE. Consider this parallel universe if it's not your only device: ever go shopping for clothing, shoes, and accessories with your wife, girlfriend, sister or mother?  If you listen carefully you will hear the little voices coming out of their heads saying:  “This goes well with that and I can use it also with that outfit.” ”Do you think this clashes with that?”  “Ooh, I love how that combination looks on you.” Portable devices such as tablets and computers can offer a whole lot more when they are combined with the digital ecosystem you have at home and the manufacturer offers online.

    Pros of each device:

    Microsoft Surface RT: There is a new functionality named SmartGlass which will let you share the content off your tablet to your Xbox 360. Microsoft Office is loaded on the tablet.  You can have more than one user profile on the tablet if you share it with others.

    Amazon Kindle or Kindle HD: If you are an Amazon consumer with an annual Amazon Prime service, you can consume videos and read books off the Amazon site.  It's the cheapest device.  It's a step up from the Kindle reader in many ways.

    Apple iPad or iPad mini: Over 270 thousand applications.  AirPlay permits you the ability to share to your TV screen. If you are a cord cutter (a person who gets their entertainment content over the web or air vs cable providers), AirPlay or SmartGlass are a huge bonus.

    iPad mini or not: The mini will fit in a purse where the larger one will not.  It's lighter, which makes it nice to hold for prolonged periods.  It has an option for LTE wireless which none of the other sub-9-inch tablets offer.  The screen is non-retina, which means the applications are smaller.  Speaking with individuals who are above 50 in age and wear glasses, the retina does not make a difference for them; however, they prefer the larger iPad over the new mini.

    Happy shopping this Chanukah season.

    The Kosher Coder.

    Follow me on twitter @KosherCoder

    Read the article

  • Security Issue in LinkedIn – View any 3rd profile without a premium account.

    - by Shaurya Anand
    Originally posted on: http://geekswithblogs.net/shauryaanand/archive/2013/06/25/153230.aspx

    I discovered this accidentally when my wife forwarded a contact on LinkedIn from her tablet, using the mobile interface of the website. On opening the contact on my desktop, I was surprised to see I needed to upgrade my account to view the contact. Doing some research along with my wife, I found this simple security vulnerability in LinkedIn that can let anyone view a contact’s full profile even when you have a “not upgraded” LinkedIn account and the contact is a “3rd + Everyone Else”.

    Here’s an example of what I am talking about. I just made a random search on LinkedIn for a contact whose name starts with Sacha. Do note, this is just a walkthrough and I am not publicizing any Sacha. I check the “3rd + Everyone Else” filter and find a “LinkedIn Member”. On clicking this person’s profile to view it, I am presented with a page asking me to upgrade. Make a note of this page’s web address and you get the profile id from it. For example, for this contact, the page address is:

        http://www.linkedin.com/profile/view?id=868XXX35

    The profile id for this contact is 868XXX35. Now, open the following page, where the profile id is the same as the one we grabbed a moment earlier:

        https://touch.www.linkedin.com/?#profile/868XXX35

    The mobile page exposes this contact’s information, and you even get the possibility to connect to this person without an introduction mail (InMail). I hope someone from LinkedIn sees this and issues a fix. I am pretty sure it’s something they don’t want users to do without purchasing an upgrade package.

    Read the article

  • BizTalk: Suspend shape and Convoy

    - by Leonid Ganeline
    Part 1: BizTalk: Instance Subscription and Convoys: Details

    This is Part 2. I am discussing the Suspend shape together with convoys, and I am going to show that using them together is undesirable. In the previous article we investigated instance subscriptions and how they can create a situation with dangerous zones in processing. Let's start with the Suspend shape. [See the BizTalk Help:] "You can use the Suspend shape to make an orchestration instance stop running until an administrator explicitly intervenes, perhaps to reflect an error condition that requires attention beyond the scope of the orchestration. All of the state information for the orchestration instance is saved, and will be reinstated when the administrator resumes the orchestration instance. When an orchestration instance is suspended, an error is raised. You can specify a message string to accompany the error to help the administrator diagnose the situation."

    On the Suspend shape the orchestration is stopped in the Suspended (Resumable) state. Next we have two choices: one is to resume, and the second is to terminate the orchestration. Is the orchestration stopped or unenlisted? You won't find a note about it anywhere. The fact is, the orchestration is stopped and still enlisted. This is very important. So again, the suspended orchestration can be resumed or terminated, and the moment when the operator or the operations script resumes or terminates it can be far away. That is also important.

    Let's go back to the case from the previous article. Make sure you notice the convoy and the dangerous zone after the last Receive shape. Now we have a Suspend shape inside the orchestration. The first orchestration instance is suspended. The next messages start a new orchestration instance and are consumed by that orchestration, right? Wrong! The orchestration is stopped on the Suspend shape but still enlisted. Now the dangerous zone, the "zombie zone", is expanded to the interval between the last Receive and the moment of termination or the end of the orchestration. The new orchestration instance for this convoy will not start until that moment. How fast will an operator find this suspended orchestration? Maybe hours or days. All this time the orchestration is still enlisted and gathering the convoy messages. We can resume the orchestration, but we cannot resume these messages together with the orchestration.

    It seems the name Suspended for the orchestration is misleading. The orchestration can be in the Started (and Enlisted) / Stopped (and Enlisted) / Unenlisted state. The Suspend shape switches the orchestration exactly to the Stopped state. The Stop name would describe the shape clearly and unambiguously, and the Stopped state would describe the orchestration. Imagine we could change BizTalk. The Orchestration Editor could search for these situations and return a compile error (in a similar case, the Orchestration Editor forces us to use only an ordered delivery port with convoys). The run-time core could force the orchestration with a convoy to be suspended in the Unresumable state, meaning the run-time unenlists the orchestration instance subscriptions. The Suspend shape name should be changed: the "Suspend" name is misleading; the "Stop" name is clear and unambiguous. The same goes for the orchestration state - it should be "Stopped", not "Suspended (Resumable)".

    Conclusion: It is not recommended to use a Suspend shape together with convoy orchestrations.

    Read the article

  • SBUG --> UK Connected Systems User Group

    - by Michael Stephenson
    Following a recent user group meeting we have decided that the UK SOA/BPM User Group will be renamed to the UK Connected Systems User Group.  The reasons for this are as follows:

    1. Other user groups who cover the same topics as us are all called something similar
    2. We feel the name change will help to increase user group membership

    The focus and topics of the user group will remain the same.

    Read the article

  • Fight for your rights as a video gamer.

    - by Chris Williams
    Soon, the U.S. Supreme Court may decide whether to hear a case that could have a lasting impact on computer and video games. The case before the Court involves a law passed by the state of California attempting to criminalize the sale of certain computer and video games. Two previous courts rejected the California law as unconstitutional, but soon the Supreme Court could have the final say. Whatever the Court's ruling, we must be prepared to continue defending our rights now and in the future. To do so, we need a large, powerful movement of gamers to speak with one voice and show that we won't sit back while lawmakers try to score political points by scapegoating video games and treating them differently than books, movies, and music. If the Court decides to hear the case, we're going to need thousands of activists like you who can help defend computer and video games by writing letters to editors, calling into talk radio stations, and educating Americans about our passion for and appreciation of computer and video games. You can help build this movement right now by inviting all your friends and fellow gamers to join the Video Game Voters Network. Use our simple tool to send an email to everyone you know asking them to stand up for gaming rights: http://videogamevoters.org/movement You can also help spread the word through Facebook and Twitter, or you can simply forward this email to everyone you know and ask them to sign up at videogamevoters.org. Time after time, courts continue to reject politicians' efforts to restrict the sale of computer and video games. But that doesn't mean the politicians will stop trying anytime soon -- in fact, it means they're likely to ramp up their efforts even more. To stop them, we must make it clear that gamers will continue to stand up for free speech -- and that the numbers are on our side. Help make sure we're ready and able to keep fighting for our gaming rights. Spread the word about the Video Game Voters Network right now: http://videogamevoters.org/movement Thank you. -- Video Game Voters Network

    Read the article

  • Two new Visual WebGui released simultaneously

    - by Webgui
    Two new Visual WebGui versions were released simultaneously. Downloads are available here. The first is a revision to the beta version of the upcoming 6.4, which brings an all-new developer/designer interface and capabilities. The second release is the latest enhancement of the current 6.3.x version. The new 6.3.15 includes the following changes over 6.3.14:

    Breaking Changes [1]
    - VWG-6132 - [v6.3.15] Deploy language resource assemblies next to the Gizmox.WebGUI.Forms assembly location. Installation puts the resources in the assemblies folder rather than the GAC. That way they are copied to the output folder of the app, thus enabling their deployment to the server.

    Bug fixes [7]
    - VWG-5714 - Help.ShowHelp of .CHM file with images should show the images
    - VWG-6132 - [v6.3.15] Deploy language resource assemblies next to the Gizmox.WebGUI.Forms assembly location
    - VWG-6401 - Radiobutton: The DoubleClick event should fire.
    - VWG-6409 - The Hourglass (white/blue) Spinner icon should not display to the left on LTR cultures
    - VWG-6452 - Calling/causing an update on a scrollable container should not reset the scroll position.
    - VWG-6463 - Redrawing a scrollable container does not preserve last scrolling position.
    - VWG-6867 - Listbox: The Items selection at run time should work correctly

    Enhancements [1]
    - VWG-6610 - Visifire - Add a click event handler on the graph

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea: a REST API delivering content to mobiles, and I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all, so I needed a profiler that supported IIS Express that I could run my integration tests against, and Google gave me:

        http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise

    Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying - I would like to decide whether I want a browser opened - but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath.

    Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous. (I did find that if you switch to Methods Grid before the call tree has loaded, you do not get the red warnings.)

    StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago - I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around.

    Before refactoring, remember to Save Profile Results from the File menu. Annoyingly, you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again.

    Having implemented StructureMap's ForGenericType, I ran my tests again and: win. Thank you, ANTS. (What does ANTS stand for, BTW?)

    There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase, I can see enormous benefit in getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven't.

    Next I'm going to profile a load test.
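    For context, the fix mentioned above - replacing the per-call reflection hack with an open-generic StructureMap registration so Validate<T>() resolves without reflection - might look roughly like this. This is a sketch, not the post's actual code, and it uses StructureMap's For(typeof(...)).Use(typeof(...)) registration form rather than the exact ForGenericType call; the validator types are hypothetical:

        using StructureMap;

        public interface IValidator<T>
        {
            bool Validate(T instance);
        }

        // Hypothetical default validator closing the open generic.
        public class DefaultValidator<T> : IValidator<T>
        {
            public bool Validate(T instance) { return instance != null; }
        }

        public static class Validation
        {
            private static readonly Container Container = new Container(x =>
            {
                // Register the open generic once; StructureMap closes it per T,
                // so no reflection is needed at the call site.
                x.For(typeof(IValidator<>)).Use(typeof(DefaultValidator<>));
            });

            public static bool Validate<T>(T instance)
            {
                return Container.GetInstance<IValidator<T>>().Validate(instance);
            }
        }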

    Read the article
