Search Results

Search found 13358 results on 535 pages for 'measurement studio'.


  • ApiChange Is Released!

    - by Alois Kraus
    I have been working on a little tool to simplify my life, and perhaps yours as a developer as well. It is basically a command line tool that allows you to execute queries against your compiled .NET code base. The main purpose is to find out how big the impact of an API change would be if you changed this or that. You can do high level operations like: diff public types for breaking changes; who uses a method? who uses or implements an interface? who references me? what format has the binary (32/64, Managed C++, Pure IL, Unmanaged)? and search for all event subscribers and unsubscribers.

    A unique feature is the check for event subscription imbalances. Forgotten event subscriptions are the cause of 90% of managed memory leaks. The check is done at a per-class level: if one class subscribes to an event more often than it unsubscribes, it is treated as a possible event subscription imbalance. Another unique ability is to search for users of string literals, which allows you to track users of a string constant, something that is not possible otherwise. For incremental builds the ShowRebuildTargets command can be used to identify the dependent targets that need a rebuild after you compile one assembly. It has some heuristics in place to determine the impact of breaking changes and finds out which targets need to be recompiled as well.

    It has a ton of other features and an API to access these things programmatically, so you can build upon these simple queries and create even better tools. Perhaps we get a Visual Studio plug-in? You can download it from CodePlex here. It works via XCopy deployment: simply let it run and check out the command line help. The best feature, in my opinion, is that the output of nearly all commands can be piped to Excel for further analysis. Since it also reads the pdbs, it can show you the source file name and line number for all matches. The following picture shows the output of a -WhoUsesType query. The following command checks where types from BaseLibraryV1.dll are used inside DependantLibV1.dll. All matches are printed out with the reason and matching item, along with file and line number. There is even a hyperlink to the match which will open Visual Studio.

        ApiChange -whousestype "*" BaseLibraryV1.dll -in DependantLibV1.dll -excel

    The "*" is the actual query, which means all types. The syntax is the same as in C#, just that placeholders are allowed ;-). More info can be found in the CodePlex documentation.

    The tool was developed in a TDD-style manner, which means that it is heavily tested and already used by a quite large user base inside the company I work for. Luckily for you, I got the permission to make it public, so you can take advantage of it. It is fully instrumented with tracing: if you find bugs, simply add the -trace command line switch to find out what is failing and send me the output.

    How is it done? Your first guess might be that it uses reflection. Wrong. It is based on Mono Cecil, a free IL parser with a fantastic API to access all internals of a managed assembly. The speed is awesome, and to make it even faster I made the tool heavily multi-threaded. The query above executed in 1.8s including the Excel output. On a rather slow machine I can analyze over 1500 assemblies in less than 40s with very low memory consumption. The true power of Mono Cecil is that I can load an assembly like any other data file. I have no problems unloading a file, whereas if I had used reflection I would need to unload a whole AppDomain just to get rid of one assembly in my memory. Just to give you a glimpse of how ApiChange.Api.dll can be used, I show you one of the unit tests:

        public void Can_Find_GenericMethodInvocations_With_Type_Parameters()
        {
            // 1. Create an aggregator to collect our matches
            UsageQueryAggregator agg = new UsageQueryAggregator();

            // 2. This is the type we want to search for. Load it via the type query
            var decimalType = TypeQuery.GetTypeByName(TestConstants.MscorlibAssembly, "System.Decimal");

            // 3. Register the type query which searches for uses of the Decimal type
            new WhoUsesType(agg, decimalType);

            // 4. Search for all users of the Decimal type in the DependandLibV1Assembly
            agg.Analyze(TestConstants.DependandLibV1Assembly);

            // Extract matches and assert
            Assert.AreEqual(2, agg.MethodMatches.Count, "Method match count");
            Assert.AreEqual("UseGenericMethod", agg.MethodMatches[0].Match.Name);
            Assert.AreEqual("UseGenericMethod", agg.MethodMatches[1].Match.Name);
        }

    Many thanks go from here to Jb Evain for the creation of Mono.Cecil. Without this fantastic piece of code it would have been much, much harder. There are other options around, like the Common Compiler Infrastructure Metadata API, which should do the same thing, but it was not a real option since the Microsoft reader failed on even simple assemblies (at least in September 2009 this was the case). Besides this, I found the CCI APIs much harder to use. The only real competitor was Reflector, which does support many things but does not let me access its cool high level analysis commands. So I decided to dig into the IL specs, and as a result you can query your compiled binaries from the command line or programmatically. The best thing is you try it out for yourself and give me some feedback on what you miss. If you want to contribute or have a cool idea what should be added, drop me a mail at A Kraus1@___No [email protected]. There is much more inside the tool I did not talk about (yet).
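    To illustrate what "load an assembly like any other data file" looks like with Mono.Cecil, here is a minimal sketch (this is not ApiChange code; it uses the public Mono.Cecil API, and the file name is just an example):

        using System;
        using Mono.Cecil;

        class CecilPeek
        {
            static void Main()
            {
                // Reads the assembly like any other data file: no AppDomain
                // is involved, so there is nothing to unload afterwards.
                AssemblyDefinition asm = AssemblyDefinition.ReadAssembly("BaseLibraryV1.dll");

                foreach (TypeDefinition type in asm.MainModule.Types)
                {
                    Console.WriteLine(type.FullName);
                    foreach (MethodDefinition method in type.Methods)
                        Console.WriteLine("  " + method.Name);
                }
            }
        }

    From raw type and method definitions like these, a query such as -WhoUsesType is essentially a filter over the instructions and references of every loaded module.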

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 2 of 2 – Memory Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible. Many people might not realize how much of an undertaking this can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind... In trying to make code efficient, there are many different factors that play a part: size of the project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code; the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to accomplish the efficiency that I look to achieve. One program that I have recently come to learn about, Red Gate ANTS Performance (CLR) and Memory Profiler, gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS Memory Profiler set by compiling some hideous example code to test against.

    Notice: As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

    Introduction – Part 2

    In my last post, I reviewed the feature-packed Red Gate ANTS Performance Profiler. Separate from the Performance Profiler is the Red Gate ANTS Memory Profiler: a simple, easy to use utility for checking how your application is handling memory management... a tool that I wish I had had many times in the past. This post will be focusing on the ANTS Memory Profiler and its tool set. The Memory Profiler has a large assortment of features just like the Performance Profiler, with the new-session screen looking nearly identical.

    ANTS Memory Profiler

    Memory profiling is not something that I have to do very often... In the past, for the few cases where I've had to find a memory leak in an application, I have usually just traced the code of the operations being performed to look for oddities... Sadly, I have come across more undisposed/non-using'ed IDisposable objects, usually from ADO.NET, than I would ever like to see. Support is not fun; however, using ANTS Memory Profiler makes this task easier. For this round of testing, I am going to use the same code from my previous example, using the WPF application.

    This time, I will choose the 'Profile Memory' option from the ANTS menu in Visual Studio, which launches the solution in its currently configured state/start-up project and then launches the ANTS Memory Profiler to help. It prepopulates all of the fields with the current project information, and all I have to do is select the 'Start Profiling' option.

    When the window comes up, it is actually quite barren, just giving ideas on how to work the profiler. You start by getting to the point in your application that you want to profile, and then taking a 'Memory Snapshot'. This performs a full garbage collection and snapshots the managed heap. Using the same WPF app as before, I will go ahead and take a snapshot now. As you can see, ANTS is already giving me lots of information regarding the snapshot; however, this is just a snapshot. The whole point of the profiler is to perform an action, usually one where a memory problem is being noticed, then take another snapshot and perform a diff between them to see what has changed. I am going to go ahead and generate 5000 primes, and then take another snapshot.

    As you can see, ANTS is already giving me a lot of new information about this snapshot compared to the last: difference in memory usage, fragmentation, class usage, etc... If you take more snapshots, you can use the dropdown at the top to set your actual comparison snapshots.

    If you look beneath the timeline, you will see a breadcrumb trail showing how best to approach profiling memory using ANTS. When you first do the comparison, you start on the Summary screen. You can either use the charts at the bottom or switch to the class list screen to get to the next step. The class list screen lists information about all of the instances between the snapshots, and at the bottom gives you a way to filter by telling ANTS what your problem is. I am going to go ahead and select the Int16[] to look at the Instance Categorizer.

    Using the Instance Categorizer, you can travel backwards to see where all of the instances are coming from. It may be hard to see in this image, but hopefully the lightbox (click on it) will help: I can see that all of these instances are rooted to the application through the UI TextBlock control. The next image will probably be even harder to see; however, using the 'Instance Retention Graph', you can trace an object's memory inheritance up the chain to see its roots as well. This is a simple example, as this is simply a known element. Usually you would be profiling an actual problem and comparing those differences. I know in the past I have spotted a problem where a new context was created per page load, and it was rooted into the application through an event. As the application began to grow, performance and reliability problems started to emerge. A tool like this would have been a great way to identify the problem quickly.

    Overview

    Overall, I think that the Red Gate ANTS Memory Profiler is a great utility for debugging those pesky leaks.

    3 Biggest Pros:
    - Easy to use interface with lots of options for configuring the profiling session
    - Intuitive and helpful interface for drilling down from summary, to instance, to root graphs
    - ANTS provides an API for controlling the profiler. Not many options, but still helpful.

    2 Biggest Cons:
    - Inability to automatically snapshot the memory by interval
    - Lack of complete integration with Visual Studio via an extension panel

    Ratings
    - Ease of Use (9/10): I really do believe that they have brought simplicity to the once difficult task of memory profiling. I especially liked how it stepped you further into the drilldown by directing you towards the best options.
    - Effectiveness (10/10): I believe that the profiler does EXACTLY what it purports to do.
    - Features (7/10): A really great set of features all around in the application; however, I would like to see some ability for automatically triggering snapshots based on intervals or framework-level items such as events.
    - Customer Service (10/10): My entire experience with Red Gate personnel has been nothing but good. Their people are friendly, helpful, and happy!
    - UI / UX (9/10): The interface is very easy to get around, and all of the options are easy to find. With a little bit of poking around, you'll be optimizing Hello World in no time flat!
    - Overall (9/10): Overall, I am happy with the Memory Profiler and its features, as well as with the service I received when working with the Red Gate personnel.

    Thank you for reading up to here, or skipping ahead – I told you it would be shorter! Please, if you do try the product, drop me a message and let me know what you think! I would love to hear any opinions you may have on the product.

    Code: Feel free to download the code I used above – download via DropBox
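    Since the event-rooted context leak mentioned above is exactly the failure mode a snapshot diff surfaces, here is a minimal, hypothetical C# sketch of it (all names are invented for illustration): a per-page object subscribes to a long-lived static event and never unsubscribes, so every instance stays reachable through the event's invocation list.

        using System;

        static class App
        {
            // Long-lived publisher that exists for the whole application lifetime.
            public static event EventHandler SettingsChanged;

            public static void RaiseSettingsChanged()
            {
                var handler = SettingsChanged;
                if (handler != null) handler(null, EventArgs.Empty);
            }
        }

        class PageContext
        {
            private readonly byte[] data = new byte[1024 * 1024]; // pretend per-page state

            public PageContext()
            {
                // BUG: subscribes but never unsubscribes, so 'this' is rooted
                // by App.SettingsChanged for the life of the application.
                App.SettingsChanged += OnSettingsChanged;
            }

            private void OnSettingsChanged(object sender, EventArgs e) { /* react */ }
        }

        class Demo
        {
            static void Main()
            {
                for (int i = 0; i < 100; i++)
                    new PageContext(); // each "page load" leaks ~1 MB

                GC.Collect();
                // Stays high: all 100 instances are still rooted via the event.
                Console.WriteLine(GC.GetTotalMemory(true));
            }
        }

    In a snapshot comparison, an Instance Retention Graph would walk from each PageContext back to the static event, which is what makes the root obvious.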

    Read the article

  • How to Control Screen Layouts in LightSwitch

    - by ChrisD
    Visual Studio LightSwitch has a bunch of screen templates that you can use to quickly generate screens. They give you good starting points that you can customize further. When you add a new screen to your project, you see a set of screen templates that you can choose from. These templates automatically lay out all the related data you choose to put on a screen. And don't underestimate them; they do a great job of laying out controls in a smart way. For instance, a tab control will be used when you select more than one related set of data to display on a screen.

    However, you're not limited to taking the layout as-is. In fact, the screen designer is pretty flexible and allows you to create stacks of controls in a variety of configurations. You just need to visualize your screen as a series of containers that you can lay out in rows and columns. You then place controls or stacks of controls into these areas to align the screen exactly how you want. If you're new to Visual Studio LightSwitch, you can see this tutorial.

    OK, let's start with a simple example. I have already designed my data entities for a simple order tracking system similar to the Northwind database, and I have added a Search Data Screen to search my Products. Now I will add a new Details Screen for my Products and make it the default screen via the "Add New Screen" dialog. The screen designer picks a simple layout for me based on the single entity I chose, in this case Product. Hit F5 to run the application, and select a Product on the search screen to open the Product Details Screen. Notice that it's pretty simple because my entity is simple.

    Click the "Customize" button in the top right of the screen so we can start tweaking it. The left side of the screen shows the containership of controls and data bindings (called the content tree) and the right side shows the live preview with data. Notice that we have a simple layout of two rows, but only one row is populated (with a vertical stack of controls in this case). The bottom row is empty.

    Each container will display a group of data that you select. For instance, in the above screen the top row is set to a vertical stack control, and the group of data to display comes from Product. So when laying out screens you need to think in terms of containers of controls bound to groups of data. To change the data to which a container is bound, select the data item next to the container. You can select the "New Group" item in order to create more containers (or controls) within the current container.

    For instance, to totally control the layout, select the Product in the top row and hit the Delete key. This will delete the vertical stack and therefore all the controls on the screen. The content tree will still have two rows, but the rows are now both empty. If you want a layout of four containers (two rows and two columns), then select "New Group" for the data item and change the vertical stack control to "Two Columns" for both of the rows. You can keep going on and on by selecting new groups and choosing between rows or columns. Here's a layout with 8 containers: 4 rows and 2 columns. And here is a layout with 7 content areas: one row across the top of the screen and three rows with two columns below that.

    When you select Choose Content and select a data item like Product, it will populate all the controls within the container (row or column in a vertical stack); however, you have complete control over what to display within each group. You can delete fields you don't want to display and/or change their controls. You can also change the size of controls and how they display by changing the settings in the properties window. If you are in the Screen Designer (and not the customization mode like we are here), you can also drag-drop data items from the left-hand side of the screen to the content tree. Note, however, that not all areas of the tree will allow you to drop a data item if there is a binding already set to a different set of data. For instance, you can't drop a Customer ID into the same group as a Product if they originate from different entities. To get around this, all you need to do is create a new group and content area as shown above.

    Let's take a more complex example that deals with more than just Product. I want to design a complex screen that displays Products and their Category, as well as all the OrderDetails for which that product is selected. This time I will create a new screen and select List and Details, select the Products screen data, and include the related OrderDetails. However, I'm going to totally change the layout so that a Product grid is at the top left and below that is the selected Product detail. Below that will be the Category text fields and image in two columns. On the right side I want the OrderDetails grid to take up the whole right side of the screen. All this can be done in customization mode while you're debugging the application. To do this, I first deleted all the content items in the tree and then re-created the content tree as shown in the image below. I also set the image to be larger and the description textbox to be 5 rows using the property window below the live preview. I added the green lines to indicate the containers and show how they map to the content tree (click to enlarge).

    I hope this demystifies the screen designer a little bit. Remember that screen templates are excellent starting points: you can take them as-is or customize them further. It takes a little fooling around with customizing screens to get them to do exactly what you want, but there are a ton of possibilities once you get the hang of it. Stay tuned for more information on how to create your own screen templates that show up in the "Add New Screen" dialog. Enjoy!

    A tutorial that might be of interest: Adding Custom Control In LightSwitch

    Read the article

  • Part 3 of 4: Tips/Tricks for Silverlight Developers.

    - by mbcrump
    Part 1 | Part 2 | Part 3 | Part 4

    I wanted to create a series of blog posts that gets right to the point and is aimed specifically at Silverlight developers. The most important questions I want this series to answer are: What is it? Why do I care? How do I do it? I hope that you enjoy this series. Let's get started.

    Tip/Trick #11
    What is it? Underline text in a TextBlock.
    Why do I care? I've seen people do some crazy things to get underlined text in a Silverlight application. In case you didn't know, there is a property for that.
    How do I do it: On a TextBlock you have a property called TextDecorations. You can easily set this property in XAML or code-behind with the following snippet:

        <UserControl x:Class="SilverlightApplication19.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400">
            <Grid x:Name="LayoutRoot" Background="White">
                <TextBlock Name="txtTB" Text="MichaelCrump.NET" TextDecorations="Underline" />
            </Grid>
        </UserControl>

    Or you can do it in code-behind:

        txtTB.TextDecorations = TextDecorations.Underline;

    Tip/Trick #12
    What is it? Get the browser information from a Silverlight application.
    Why do I care? This will allow you to program around certain browser conditions that you otherwise may not be aware of.
    How do I do it: It is very easy to extract browser information out of a Silverlight application by using the BrowserInformation class. You can copy/paste this code snippet to have access to all of them:

        string strBrowserName = HtmlPage.BrowserInformation.Name;
        string strBrowserMinorVersion = HtmlPage.BrowserInformation.BrowserVersion.Minor.ToString();
        string strIsCookiesEnabled = HtmlPage.BrowserInformation.CookiesEnabled.ToString();
        string strPlatform = HtmlPage.BrowserInformation.Platform;
        string strProductName = HtmlPage.BrowserInformation.ProductName;
        string strProductVersion = HtmlPage.BrowserInformation.ProductVersion;
        string strUserAgent = HtmlPage.BrowserInformation.UserAgent;
        string strBrowserVersion = HtmlPage.BrowserInformation.BrowserVersion.ToString();
        string strBrowserMajorVersion = HtmlPage.BrowserInformation.BrowserVersion.Major.ToString();

    Tip/Trick #13
    What is it? Always check the minRuntimeVersion after creating a new Silverlight application.
    Why do I care? Whenever you create a new Silverlight application and host it inside of an ASP.NET website, you will notice Visual Studio generates some code for you. The minRuntimeVersion value is set by the SDK installed on your system. Be careful if you are playing with betas like "LightSwitch", because you will have a higher version of the SDK installed. So when you create a new Silverlight 4 project and deploy it, your customers will get a prompt telling them they need to upgrade Silverlight. They also will not be able to upgrade to your version because it's not released to the public yet.
    How do I do it: Open up the .aspx or .html file Visual Studio generated and look for the minRuntimeVersion line. Make sure it matches whatever version you are actually targeting.

    Tip/Trick #14
    What is it? The VisualTreeHelper class provides useful methods for traversing nodes in a visual tree.
    Why do I care? It's nice to have the ability to "walk" a visual tree or to get the rendered elements of a ListBox. I have found it very useful for debugging my Silverlight applications.
    How do I do it: Many examples exist on the web, but say that you have a huge Silverlight application and want to find the parent object of a control. In the code snippet below, we would get 3 MessageBoxes (StackPanel first, Grid second and UserControl third). This is a tiny application, but imagine how helpful this would be on a large project.

        <UserControl x:Class="SilverlightApplication18.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400">
            <Grid x:Name="LayoutRoot" Background="White">
                <StackPanel>
                    <Button Content="Button" Height="23" Name="button1" VerticalAlignment="Top" Width="75" Click="button1_Click" />
                </StackPanel>
            </Grid>
        </UserControl>

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            DependencyObject obj = button1;
            while ((obj = VisualTreeHelper.GetParent(obj)) != null)
            {
                MessageBox.Show(obj.GetType().ToString());
            }
        }

    Tip/Trick #15
    What is it? Add ChildWindows to your Silverlight application.
    Why do I care? ChildWindows are a great way to direct a user's attention to a particular part of your application. This could be used when saving or entering data.
    How do I do it: Right click your Silverlight application, click Add, then New Item, and select Silverlight Child Window. Add an event and call the ChildWindow with the following snippet:

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            ChildWindow1 cw = new ChildWindow1();
            cw.Show();
        }

    Your main application can still process information, but this screen forces the user to select an action before proceeding. The code-behind of the ChildWindow will look like the following:

        namespace SilverlightApplication18
        {
            public partial class ChildWindow1 : ChildWindow
            {
                public ChildWindow1()
                {
                    InitializeComponent();
                }

                private void OKButton_Click(object sender, RoutedEventArgs e)
                {
                    this.DialogResult = true;
                    // TODO: Add logic to save what the user entered.
                }

                private void CancelButton_Click(object sender, RoutedEventArgs e)
                {
                    this.DialogResult = false;
                }
            }
        }

    Thanks for reading and please come back for Part 4.

    Subscribe to my feed | CodeProject
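    Complementing the parent walk in Tip #14, here is a small companion sketch going the other direction: dumping every rendered element below a given root with VisualTreeHelper.GetChildrenCount and GetChild (handy, for example, to see the containers a ListBox actually generated). The class and method names are purely illustrative:

        using System.Text;
        using System.Windows;
        using System.Windows.Media;

        public static class VisualTreeDumper
        {
            // Appends the type name of 'root' and, recursively, of every
            // visual child below it, indented two spaces per depth level.
            public static void Dump(DependencyObject root, StringBuilder sb, int depth)
            {
                sb.AppendLine(new string(' ', depth * 2) + root.GetType().Name);
                int count = VisualTreeHelper.GetChildrenCount(root);
                for (int i = 0; i < count; i++)
                    Dump(VisualTreeHelper.GetChild(root, i), sb, depth + 1);
            }
        }

    Calling Dump(LayoutRoot, sb, 0) after the control has loaded would list the Grid, the StackPanel, the Button and the Button's template parts.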

    Read the article

  • What might cause the big overhead of making a HttpWebRequest call?

    - by Dimitri C.
    When I send/receive data using HttpWebRequest (on Silverlight, using the HTTP POST method) in small blocks, I measure a very low throughput of 500 bytes/s over a "localhost" connection. When sending the data in large blocks, I get 2 MB/s, which is roughly 4,000 times faster. Does anyone know what could cause this incredibly big overhead?

    Update: I did the performance measurement on both Firefox 3.6 and Internet Explorer 7. Both showed similar results.

    Update: The Silverlight client-side code I use is essentially my own implementation of the WebClient class. The reason I wrote it is that I noticed the same performance problem with WebClient, and I thought that HttpWebRequest would allow me to work around the performance issue. Regrettably, this did not work. The implementation is as follows:

        public class HttpCommChannel
        {
            public delegate void ResponseArrivedCallback(object requestContext, BinaryDataBuffer response);

            public HttpCommChannel(ResponseArrivedCallback responseArrivedCallback)
            {
                this.responseArrivedCallback = responseArrivedCallback;
                this.requestSentEvent = new ManualResetEvent(false);
                this.responseArrivedEvent = new ManualResetEvent(true);
            }

            public void MakeRequest(object requestContext, string url, BinaryDataBuffer requestPacket)
            {
                responseArrivedEvent.WaitOne();
                responseArrivedEvent.Reset();

                this.requestMsg = requestPacket;
                this.requestContext = requestContext;

                this.webRequest = WebRequest.Create(url) as HttpWebRequest;
                this.webRequest.AllowReadStreamBuffering = true;
                this.webRequest.ContentType = "text/plain";
                this.webRequest.Method = "POST";
                this.webRequest.BeginGetRequestStream(new AsyncCallback(this.GetRequestStreamCallback), null);

                this.requestSentEvent.WaitOne();
            }

            void GetRequestStreamCallback(IAsyncResult asynchronousResult)
            {
                System.IO.Stream postStream = webRequest.EndGetRequestStream(asynchronousResult);
                postStream.Write(requestMsg.Data, 0, (int)requestMsg.Size);
                postStream.Close();
                requestSentEvent.Set();
                webRequest.BeginGetResponse(new AsyncCallback(this.GetResponseCallback), null);
            }

            void GetResponseCallback(IAsyncResult asynchronousResult)
            {
                HttpWebResponse response = (HttpWebResponse)webRequest.EndGetResponse(asynchronousResult);
                Stream streamResponse = response.GetResponseStream();
                Dim.Ensure(streamResponse.CanRead);
                byte[] readData = new byte[streamResponse.Length];
                Dim.Ensure(streamResponse.Read(readData, 0, (int)streamResponse.Length) == streamResponse.Length);
                streamResponse.Close();
                response.Close();
                webRequest = null;
                responseArrivedEvent.Set();
                responseArrivedCallback(requestContext, new BinaryDataBuffer(readData));
            }

            HttpWebRequest webRequest;
            ManualResetEvent requestSentEvent;
            BinaryDataBuffer requestMsg;
            object requestContext;
            ManualResetEvent responseArrivedEvent;
            ResponseArrivedCallback responseArrivedCallback;
        }

    I use this code to send data back and forth to an HTTP server.
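    Since the measurements above show that large blocks are dramatically cheaper per byte than small ones, one workaround worth sketching is to coalesce small messages client-side and ship them as a single request. This is only an illustrative sketch built on the HttpCommChannel above; the 64 KB threshold is an arbitrary assumption, and whether batching is acceptable depends on the protocol:

        using System.IO;

        public class BatchingSender
        {
            private readonly MemoryStream pending = new MemoryStream();
            private readonly HttpCommChannel channel;
            private readonly string url;
            private const int FlushThreshold = 64 * 1024; // assumed batch size

            public BatchingSender(HttpCommChannel channel, string url)
            {
                this.channel = channel;
                this.url = url;
            }

            // Queue a small message; it is only sent once enough bytes accumulate.
            public void Send(byte[] data)
            {
                pending.Write(data, 0, data.Length);
                if (pending.Length >= FlushThreshold)
                    Flush();
            }

            // Ship everything queued so far as a single POST body.
            public void Flush()
            {
                if (pending.Length == 0)
                    return;
                channel.MakeRequest(null, url, new BinaryDataBuffer(pending.ToArray()));
                pending.SetLength(0);
            }
        }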

    Read the article

  • Difficulty getting Saxon into XQuery mode instead of XSLT

    - by Rosarch
    I'm having difficulty getting XQuery to work. I downloaded Saxon-HE 9.2. It seems to only want to work with XSLT. When I type "java -jar saxon9he.jar", I get back usage information for XSLT. When I use the command syntax for XQuery, it doesn't recognize the parameters (like -q) and gives the XSLT usage information. Here are some command line interactions:

        >java -jar saxon9he.jar
        No source file name
        Saxon-HE 9.2.0.6J from Saxonica
        Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html
        Options:
          -a                      Use xml-stylesheet PI, not -xsl argument
          -c:filename             Use compiled stylesheet from file
          -config:filename        Use configuration file
          -cr:classname           Use collection URI resolver class
          -dtd:on|off             Validate using DTD
          -expand:on|off          Expand defaults defined in schema/DTD
          -explain[:filename]     Display compiled expression tree
          -ext:on|off             Allow|Disallow external Java functions
          -im:modename            Initial mode
          -ief:class;class;...    List of integrated extension functions
          -it:template            Initial template
          -l:on|off               Line numbering for source document
          -m:classname            Use message receiver class
          -now:dateTime           Set currentDateTime
          -o:filename             Output file or directory
          -opt:0..10              Set optimization level (0=none, 10=max)
          -or:classname           Use OutputURIResolver class
          -outval:recover|fatal   Handling of validation errors on result document
          -p:on|off               Recognize URI query parameters
          -r:classname            Use URIResolver class
          -repeat:N               Repeat N times for performance measurement
          -s:filename             Initial source document
          -sa                     Use schema-aware processing
          -strip:all|none|ignorable   Strip whitespace text nodes
          -t                      Display version and timing information
          -T[:classname]          Use TraceListener class
          -TJ                     Trace calls to external Java functions
          -tree:tiny|linked       Select tree model
          -traceout:file|#null    Destination for fn:trace() output
          -u                      Names are URLs not filenames
          -val:strict|lax         Validate using schema
          -versionmsg:on|off      Warn when using XSLT 1.0 stylesheet
          -warnings:silent|recover|fatal   Handling of recoverable errors
          -x:classname            Use specified SAX parser for source file
          -xi:on|off              Expand XInclude on all documents
          -xmlversion:1.0|1.1     Version of XML to be handled
          -xsd:file;file..        Additional schema documents to be loaded
          -xsdversion:1.0|1.1     Version of XML Schema to be used
          -xsiloc:on|off          Take note of xsi:schemaLocation
          -xsl:filename           Stylesheet file
          -y:classname            Use specified SAX parser for stylesheet
          --feature:value         Set configuration feature (see FeatureKeys)
          -?                      Display this message
          param=value             Set stylesheet string parameter
          +param=filename         Set stylesheet document parameter
          ?param=expression       Set stylesheet parameter using XPath
          !option=value           Set serialization option

        >java -jar saxon9he.jar -q:"..\w3xQueryTut.xq"
        Unknown option -q:..\w3xQueryTut.xq
        Saxon-HE 9.2.0.6J from Saxonica
        Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html
        Options:
          -a                      Use xml-stylesheet PI, not -xsl argument
          // etc...

        >java net.sf.saxon.Query -q:"..\w3xQueryTut.xq"
        Exception in thread "main" java.lang.NoClassDefFoundError: net/sf/saxon/Query
        Caused by: java.lang.ClassNotFoundException: net.sf.saxon.Query
        // etc...
        Could not find the main class: net.sf.saxon.Query. Program will exit.

    I'm probably making some stupid mistake. Do you know what it could be?

    Read the article

  • Calculate year for end date: PostgreSQL

    - by Dave Jarvis
    Background

    Users can pick dates as shown in the following screen shot. Any starting month/day and ending month/day combinations are valid, such as:

        Mar 22 to Jun 22
        Dec 1 to Feb 28

    The second combination is difficult (I call it the "tricky date scenario") because the year for the ending month/day is before the year for the starting month/day. That is to say, for the year 1900 (also shown selected in the screen shot above), the full dates would be:

        Dec 22, 1900 to Feb 28, 1901
        Dec 22, 1901 to Feb 28, 1902
        ...
        Dec 22, 2007 to Feb 28, 2008
        Dec 22, 2008 to Feb 28, 2009

    Problem

    Writing a SQL statement that selects values from a table with dates that fall between the start month/day and end month/day, regardless of how the start and end days are selected. In other words, this is a year wrapping problem.

    Inputs

    The query receives as parameters:

        Year1, Year2: The full range of years, independent of month/day combination.
        Month1, Day1: The starting day within the year to gather data.
        Month2, Day2: The ending day within the year (or the next year) to gather data.

    Previous Attempt

    Consider the following MySQL code (that worked):

        end_year = start_year + greatest( -1 *
            sign(
                datediff(
                    date( concat_ws('-', year, end_month, end_day ) ),
                    date( concat_ws('-', year, start_month, start_day ) )
                )
            ), 0 )

    How it works, with respect to the tricky date scenario:

    1. Create two dates in the current year. The first date is Dec 22, 1900 and the second date is Feb 28, 1900.
    2. Count the difference, in days, between the two dates.
    3. If the result is negative, it means the year for the second date must be incremented by 1. In this case: add 1 to the current year and create a new end date, Feb 28, 1901.
    4. Check to see if the date range for the data falls between the start and calculated end date.
    5. If the result is positive, the dates have been provided in chronological order and nothing special needs to be done.

    This worked in MySQL because the difference in dates could be positive or negative. In PostgreSQL, the equivalent functionality always returns a positive number, regardless of the dates' relative chronological order.

    Question

    How should the following (broken) code be rewritten for PostgreSQL to take into consideration the relative chronological order of the starting and ending month/day pairs (with respect to an annual temporal displacement)?

        SELECT
          m.amount
        FROM
          measurement m
        WHERE
          (extract(MONTH FROM m.taken) >= month1 AND
           extract(DAY FROM m.taken) >= day1)
          AND
          (extract(MONTH FROM m.taken) <= month2 AND
           extract(DAY FROM m.taken) <= day2)

    Any thoughts, comments, or questions? (The dates are pre-parsed into MM/DD format in PHP. My preference is for a pure PostgreSQL solution, but I am open to suggestions on what might make the problem simpler using PHP.)

    Versions

    PostgreSQL 8.4.4 and PHP 5.2.10
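    The core of the question is a circular-range test: when (Month2, Day2) sorts before (Month1, Day1), the AND between the two halves of the WHERE clause must effectively become an OR. To make that reasoning step concrete, here is the predicate as a small self-contained C# sketch (C# purely for illustration; the actual fix belongs in SQL):

        using System;

        class SeasonRange
        {
            // True when (month, day) lies in the range (month1, day1)..(month2, day2),
            // treating the range as circular: if the end sorts before the start,
            // the range wraps into the following year (e.g. Dec 22 .. Feb 28).
            static bool InRange(int month, int day,
                                int month1, int day1,
                                int month2, int day2)
            {
                int d = month * 100 + day;          // Dec 22 -> 1222, Feb 28 -> 228
                int start = month1 * 100 + day1;
                int end = month2 * 100 + day2;

                return start <= end
                    ? d >= start && d <= end        // normal range: AND
                    : d >= start || d <= end;       // wrapped range: OR
            }

            static void Main()
            {
                Console.WriteLine(InRange(1, 15, 12, 22, 2, 28));  // True: Jan 15 in Dec 22..Feb 28
                Console.WriteLine(InRange(6, 1, 12, 22, 2, 28));   // False: Jun 1 outside it
                Console.WriteLine(InRange(4, 10, 3, 22, 6, 22));   // True: Apr 10 in Mar 22..Jun 22
            }
        }

    Encoding the pair as a single month*100+day value also sidesteps a second bug in the broken query above: comparing month and day independently wrongly rejects, say, Apr 10 within Mar 22..Jun 22 because 10 >= 22 fails.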

    Read the article

  • objective-c Add to/Edit .plist file

    - by Dave
    Does writeToFile:atomically: add data to an existing .plist? Is it possible to modify a value in a .plist? Should the app be recompiled for the change to take effect?

    I have a custom .plist in my app. The structure is as below:

        <array>
            <dict>
                <key>Title</key>
                <string>MyTitle</string>
                <key>Measurement</key>
                <dict>
                    <key>prop1</key>
                    <real>28.86392</real>
                    <key>prop2</key>
                    <real>75.12451</real>
                </dict>
                <key>Distance</key>
                <dict>
                    <key>prop3</key>
                    <real>37.49229</real>
                    <key>prop4</key>
                    <real>58.64502</real>
                </dict>
            </dict>
        </array>

    The array tag holds multiple items with the same structure. I need to add items to the existing .plist. Is that possible? I can write a UIView to do just that, if so.

    EDIT - OK, I just tried to write to the file, hoping it would at least overwrite if not append. Strangely, the following code selects different paths while reading and writing:

        NSString *rootPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];

    The .plist file is in my project. I can read from it. When I write to it, it creates a .plist file and saves it in my /users/.../library/! Does this make sense to anyone?

    Read the article

  • Drupal 7: File field causes error with Dependable Dropdowns

    - by LoneWolfPR
    I'm building a form in a module using the Form API. I've had a couple of dependent dropdowns that have been working just fine. The code is as follows:

        $types = db_query('SELECT * FROM {touchpoints_metric_types}')->fetchAllKeyed(0, 1);
        $types = array('0' => '- Select -') + $types;
        $selectedType = isset($form_state['values']['metrictype']) ? $form_state['values']['metrictype'] : 0;

        $methods = _get_methods($selectedType);
        $selectedMethod = isset($form_state['values']['measurementmethod']) ? $form_state['values']['measurementmethod'] : 0;

        $form['metrictype'] = array(
            '#type' => 'select',
            '#title' => t('Metric Type'),
            '#options' => $types,
            '#default_value' => $selectedType,
            '#ajax' => array(
                'event' => 'change',
                'wrapper' => 'method-wrapper',
                'callback' => 'touchpoints_method_callback'
            )
        );

        $form['measurementmethod'] = array(
            '#type' => 'select',
            '#title' => t('Measurement Method'),
            '#prefix' => '<div id="method-wrapper">',
            '#suffix' => '</div>',
            '#options' => $methods,
            '#default_value' => $selectedMethod,
        );

    Here are the _get_methods and touchpoints_method_callback functions:

        function _get_methods($selected) {
            if ($selected) {
                $methods = db_query("SELECT * FROM {touchpoints_m_methods} WHERE mt_id=$selected")->fetchAllKeyed(0, 2);
            } else {
                $methods = array();
            }
            $methods = array('0' => "- Select -") + $methods;
            return $methods;
        }

        function touchpoints_method_callback($form, &$form_state) {
            return $form['measurementmethod'];
        }

    This all worked fine until I added a file field to the form. Here is the code I used for that:

        $form['metricfile'] = array(
            '#type' => 'file',
            '#title' => 'Attach a File',
        );

    Now that the file field is added, if I change the first dropdown it hangs with the 'Please wait' message next to it, without ever loading the contents of the second dropdown. I also get the following error in my JavaScript console:

        Uncaught TypeError: Object function (a,b){return new p.fn.init(a,b,c)} has no method 'handleError'

    What am I doing wrong here?

    Read the article

  • How to check total cache size using a program

    - by user1888541
    So I'm having some trouble creating a program to measure cache size in C. I understand the basic concept of going about this, but I'm still having trouble figuring out exactly what I am doing wrong. Basically, I create an array of varying length (going by powers of 2) and access each element in the array, putting it in a dummy variable. I go through the array and do this around 1000 times, to negate the "noise" that would otherwise occur if I only did it once, and get an accurate measurement for time. Then, I look for the size that causes a big jump in access time. Unfortunately, this is where I am having my problem: I don't see this jump using my code, so clearly I am doing something wrong. Another thing is that I used /proc/cpuinfo to check the cache and it said the size was 6114, but that is not a power of 2. I was told to go by powers of 2 to figure out the cache; can anyone explain why this is?

    Here is the gist of my code... I will post the rest if need be:

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>

        int main(void)
        {
            struct timeval start;
            struct timeval end;

            int n = 1; // change this to test different sizes
            // I'm trying to check the time "manually" first, before creating a loop
            // for the program to do it by itself; this is why I have a separate "n"
            // variable to increase the size.
            int array_size = 1048576 * n;
            char x = 0;
            int i = 0, j = 0;
            char *a;

            a = malloc(sizeof(char) * array_size);

            gettimeofday(&start, NULL);
            for (i = 0; i < 1000; i++) {
                for (j = 0; j < array_size; j += 1) {
                    x = a[j];
                }
            }
            gettimeofday(&end, NULL);

            int timeTaken = (end.tv_sec * 1000000 + end.tv_usec) - (start.tv_sec * 1000000 + start.tv_usec);
            printf("Time Taken: %d \n", timeTaken);
            printf("Average: %f \n", (double)timeTaken / (double)array_size);
            return 0;
        }
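    One common reason the jump never shows up is that a stride-1 scan is largely hidden by hardware prefetching, so every size looks fast. Probing with a larger stride makes the cache-level boundaries more visible. As a hedged sketch of that measurement idea (in C#, the language of the other examples on this page; the 64-byte line size and pass counts are assumptions, and JIT/GC add noise that C does not have):

        using System;
        using System.Diagnostics;

        class CacheProbe
        {
            static void Main()
            {
                const int stride = 64; // assumed cache-line size in bytes
                for (int kb = 4; kb <= 32 * 1024; kb *= 2)
                {
                    byte[] a = new byte[kb * 1024];
                    byte x = 0;

                    // Warm-up pass: pages the array in and lets the JIT settle.
                    for (int j = 0; j < a.Length; j += stride) x = a[j];

                    const int passes = 200;
                    var sw = Stopwatch.StartNew();
                    for (int p = 0; p < passes; p++)
                        for (int j = 0; j < a.Length; j += stride)
                            x = a[j];
                    sw.Stop();

                    long touches = (long)passes * (a.Length / stride);
                    double nsPerTouch = sw.Elapsed.TotalMilliseconds * 1e6 / touches;
                    // Printing x keeps the loop from being optimized away entirely.
                    Console.WriteLine("{0,6} KB: {1:F2} ns/access ({2})", kb, nsPerTouch, x);
                }
            }
        }

    The same anti-dead-code trick applies to the C version above: with optimization on, a loop whose only effect is assigning to x can be removed entirely, so the result should be used (or x made volatile).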

    Read the article

  • php: parsing and converting array structure

    - by mwb
    I need to convert one array structure into another array structure. I hope someone will find it worth their time to show how this could be done in a simple manner. It's a little above my array manipulation skills.

    The structure we start out with looks like this:

        $cssoptions = array(
            array(
                'group'    => 'Measurements',
                'selector' => '#content',
                'rule'     => 'width',
                'value'    => '200px'
            ) // end data set
            , array(
                'group'    => 'Measurements',
                'selector' => '#content',
                'rule'     => 'margin-right',
                'value'    => '20px'
            ) // end data set
            , array(
                'group'    => 'Colors',
                'selector' => '#content',
                'rule'     => 'color',
                'value'    => '#444'
            ) // end data set
            , array(
                'group'    => 'Measurements',
                'selector' => '.sidebar',
                'rule'     => 'margin-top',
                'value'    => '10px'
            ) // end data set
        ); // END $cssoptions

    It's a collection of discrete data sets, each consisting of an array that holds two key = value pairs describing a 'css-rule' and a 'css-rule-value'. Further, each data set holds a key = value pair describing the 'css-selector-group' that the 'css-rule' should belong to, and a key = value pair describing a 'rule-group' that should be used for structuring the rendering of the final CSS into a neat CSS code block arranged by the kind of properties they describe (colors, measurements, typography etc.).

    Now, I need to parse that and turn it into a new structure, where the:

        'rule' => 'rule-name' , 'value' => 'value-string'

    for each data set is converted into:

        'rule-name' => 'value-string'

    ...and then placed into a new array structure where all 'rule-name' = 'value-string' pairs are aggregated under their respective 'selector-values', like this:

        '#content' => array(
            'width'        => '200px',
            'margin-right' => '20px'
        ) // end selector block

    ...and finally all those blocks should be grouped under their respective 'style-groups', creating a final resulting array structure like this:

        $css => array(
            'Measurements' => array(
                '#content' => array(
                    'width'        => '200px',
                    'margin-right' => '20px'
                ) // end selector block
                , '.sidebar' => array(
                    'margin-top' => '10px'
                ) // end selector block
            ) // end rule group
            , 'Colors' => array(
                '#content' => array(
                    'color' => '#444'
                ) // end selector block
            ) // end rule group
        ); // end css
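    The transformation itself is just a two-level group-by: group, then selector, then (rule => value). To make the nesting logic explicit, here is the shape of the algorithm as a C# sketch (purely illustrative; nested PHP associative arrays map onto nested dictionaries):

        using System;
        using System.Collections.Generic;

        class CssRegrouper
        {
            static void Main()
            {
                // Flat data sets: (group, selector, rule, value).
                var cssoptions = new[]
                {
                    new { Group = "Measurements", Selector = "#content", Rule = "width",        Value = "200px" },
                    new { Group = "Measurements", Selector = "#content", Rule = "margin-right", Value = "20px"  },
                    new { Group = "Colors",       Selector = "#content", Rule = "color",        Value = "#444"  },
                    new { Group = "Measurements", Selector = ".sidebar", Rule = "margin-top",   Value = "10px"  },
                };

                // group -> selector -> rule -> value
                var css = new Dictionary<string, Dictionary<string, Dictionary<string, string>>>();

                foreach (var o in cssoptions)
                {
                    if (!css.ContainsKey(o.Group))
                        css[o.Group] = new Dictionary<string, Dictionary<string, string>>();
                    if (!css[o.Group].ContainsKey(o.Selector))
                        css[o.Group][o.Selector] = new Dictionary<string, string>();
                    css[o.Group][o.Selector][o.Rule] = o.Value; // 'rule-name' => 'value-string'
                }

                foreach (var g in css)
                    foreach (var s in g.Value)
                        foreach (var r in s.Value)
                            Console.WriteLine("{0} / {1} / {2}: {3}", g.Key, s.Key, r.Key, r.Value);
            }
        }

    In PHP the whole loop body collapses to one line, because assignment auto-creates the intermediate arrays: $css[$set['group']][$set['selector']][$set['rule']] = $set['value'];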

    Read the article

  • Graphics card initialisation problems when booting - requires a "double" boot

    - by DMA57361
    Problem Outline

    When booting from cold (my machine is disconnected from mains power when off, but leaving it connected doesn't help), the graphics card (a single PCI-e GeForce 460) will not initialise on the first boot, leaving me with the motherboard's on-board graphics (which kick in automatically if no PCI-e card is found). However, if I restart the computer - normally I do this by powering it off just after the numlock lights up on the keyboard (i.e., just after POST/BIOS and before Windows takes over), waiting for the system to whirr down, and powering up again - the graphics card will work correctly. Once double-booted in this manner the system seems to work correctly, with no noticeable problems. This is reproducible every time I try to boot, and it has been working like this for about a month now.

    Background Information

    Sept 2010 - I suffered a hardware malfunction (crashes in Windows and graphics corruption on BIOS screens). By way of spare hardware I determined that replacing the PSU removed the issue, so I replaced the PSU with a brand new one of slightly higher power (460W replaced with 500W).

    Oct 2010 - The problem resurfaced. I purchased a new graphics card (GeForce 460), which removed the problem. The new graphics card immediately started having the boot initialisation problems mentioned. I presumed there was a motherboard fault all along, but because the system worked once booted, and I was temporarily out of spare money, I left the system alone and continued to use it.

    Early/Mid Dec 2010 - In the space of 5 days I received 3 instances of hard drive corruption (seemingly fixed by chkdsk and sfc in each case...). Since I was already under the impression the motherboard was faulty, I purchased a new one ASAP; this also required new RAM (as I dropped from 4 slots to 2 and didn't want to drop memory quantity).

    Past 3-4 weeks - With a brand new PSU, graphics card, motherboard and RAM, I'm suffering the problem outlined above. So, what could be causing this, and how do I resolve it?

    Additional Notes

    - Once double-booted the system seems to work entirely correctly.
    - The graphics card problem has occurred on two entirely different motherboards.
    - I do not have the opportunity to test the graphics card in a different computer (I've only the old motherboard, which is dubious, or a really old desktop that only has an AGP port).
    - Under load (i.e., modern games running long enough for temperatures to plateau) the system remains stable and performs as expected.
    - The software that came with the new motherboard and SpeedFan both report all voltages and temperatures within nominal bounds, both when idle and under load.
    - I've looked over the BIOS settings for my motherboard multiple times and can find nothing that helps.
    - This system is configured to run with everything at standard levels; no overclocking.
    - I've tried booting the system with only the mobo and graphics card connected (thinking maybe my new PSU was too weak for the new graphics card, even though it meets the quoted PSU requirements for the card), but the same problem persists (and really, if the PSU were weak I'd have problems with the system under load).
    - When the graphics card does not initialise, the fan on its cooling unit is running, possibly slower than otherwise - but this measurement is by eye and so unreliable.

    Read the article

  • What was scientifically shown to support productivity when organizing/accessing file and folders?

    - by Tom Wijsman
    I have gathered terabytes of data, but it has become a habit to store files and folders in the same folder: that folder could be seen as a kind of inbox where most files (non-installations) enter my system. This way I end up with a big collection of files that are hard to organize properly. I mostly end up making folders that match their file type, but then I still have several gigabytes of data per folder, which doesn't make it efficient enough that I can use the folder productively. I'd rather do a few clicks than have to search through the files, whether by some software product or by looking through the folder. Often the file names themselves are not proper, so it would be easier to recognize them if there were few in a folder, rather than thousands of them.

    Scaling in the structure of directory trees in a computer cluster summarizes this problem as follows:

        The processes of storing and retrieving information are rapidly gaining importance in science as well as society as a whole [1, 2, 3, 4]. A considerable effort is being undertaken, firstly to characterize and describe how publicly available information, for example in the world wide web, is actually organized, and secondly, to design efficient methods to access this information.

        [1] R. M. Shiffrin and K. Börner, Proc. Natl. Acad. Sci. USA 101, 5183 (2004).
        [2] S. Lawrence and C. L. Giles, Nature 400, 107–109 (1999).
        [3] R. F. i Cancho and R. V. Solé, Proc. R. Soc. London, Ser. B 268, 2261 (2001).
        [4] M. Sigman and G. A. Cecchi, Proc. Natl. Acad. Sci. USA 99, 1742 (2002).

    It goes on to explain how the data is usually organized by taking a general look at it, but judging by the abstract and conclusion, it doesn't arrive at a conclusion or approach that results in a productive organization of a directory hierarchy.

    So, in essence, this is a problem for which I haven't found a solution yet, and I would love to see a scientific solution to it. Upon searching further, I don't seem to find anything useful, or free papers that approach this problem, so it might be that I'm looking in the wrong place. I've also noted that there are different ways to term this problem, which lead to different sets of papers; perhaps a paper is out there, but I'm just not using the same terms as that paper uses? They often use more scientific terms.

    I once heard a story about an advocate with a laptop who simply outperformed an advocate who had tons of papers, which shows how proper organization leads to productivity; but that story didn't share details on how the advocate used the laptop or how he had organized his data. In any case, it was way more useful than how most of us organize our data these days...

    Don't just advise me on how I should organize my data; I'm not looking for suggestions here. I would love to see statistics or scientific measurement approaches that help me confirm that a given organization helps me reach my goal.

    Read the article

  • Linux not buffering block I/O when the device is not "in use" (i.e. mounted)

    - by Radek Hladík
    I am installing a new server and I've found an interesting issue. The server is running Fedora 19 (3.11.7-200.fc19.x86_64 kernel) and is supposed to host a few KVM/Qemu virtual servers (mail server, file server, etc.). The HW is an Intel(R) Xeon(R) CPU 5160 @ 3.00GHz with 16GB RAM.

    One of the most important features will be the Samba server, and we have decided to make it a virtual machine with almost direct access to the disks. So the real HDD is cached on SSD (via bcache), then raided with md, and the final device is exported into the virtual machine via virtio. The virtual machine is again Fedora 19 with the same kernel.

    One important topic to find out is whether the virtualization layer introduces significant overhead on disk I/O. So far I've been able to get up to 180MB/s in the VM and up to 220MB/s on real HW (on the SSD disk). I am still not sure why the overhead is so big, but it is more than the network can handle, so I do not care so much.

    The interesting thing is that I've found that disk reads are not buffered in the VM unless I create and mount an FS on the disk, or I use the disk somehow. Simply put, let's do a dd to read the disk for the first time (the /dev/vdd is an old Raptor disk; 70MB/s is its real speed):

        [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
        2621440000 bytes (2.6 GB) copied, 36.8038 s, 71.2 MB/s
        Buffers: 14444 kB

    Rereading the data shows that it is cached somewhere, but not in the buffers of the VM. Also, the speed increased to "only" 500MB/s. The VM has 4GB of RAM (more than the test file):

        [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
        2621440000 bytes (2.6 GB) copied, 5.16016 s, 508 MB/s
        Buffers: 14444 kB
        [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
        2621440000 bytes (2.6 GB) copied, 5.05727 s, 518 MB/s
        Buffers: 14444 kB

    Now let's mount the FS on /dev/vdd and try the dd again:

        [root@localhost ~]# mount /dev/vdd /mnt/tmp
        [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
        2621440000 bytes (2.6 GB) copied, 4.68578 s, 559 MB/s
        Buffers: 2574592 kB
        [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
        2621440000 bytes (2.6 GB) copied, 1.50504 s, 1.7 GB/s
        Buffers: 2574592 kB

    While the first read was the same, all 2.6GB got buffered and the next read ran at 1.7GB/s. And when I unmount the device:

        [root@localhost ~]# umount /mnt/tmp
        [root@localhost ~]# cat /proc/meminfo | grep Buffers
        Buffers: 14452 kB
        [root@localhost ~]# dd if=/dev/vdd of=/dev/null bs=256k count=10000 ; cat /proc/meminfo | grep Buffers
        2621440000 bytes (2.6 GB) copied, 5.10499 s, 514 MB/s
        Buffers: 14468 kB

    The bcache was disabled while testing, and the results are the same on faster (newer) HDDs and on SSD (except for the initial read speed, of course).

    To sum it up: when I read from the device via dd the first time, it gets read from the disk. The next time I reread it, it gets cached in the host but not in the guest (that's actually the same issue; more on that later). When I mount the filesystem but read the device directly, it gets cached in the VM (via buffers). As soon as I stop "using" it, the buffers are discarded and the device is not cached anymore in the VM. When I looked at the Buffers value on the host, I realized that the situation is the same there: the block I/O gets buffered only when the disk is in use, which in this case means "exported to a VM".

    On the host, after all the measurements were done: 3165552 buffers. On the host, after the VM shutdown: 119176 buffers.

    I know it is not important, as the disks will be mounted all the time, but I am curious and would like to know why it works like this.

    Read the article

  • Performance issues when using SSD for a developer notebook (WAMP/LAMP stack)?

    - by András Szepesházi
    I'm a web application developer using my notebook as a standalone development environment (WAMP stack). I just switched from a Core2 Duo Vista 32-bit notebook with 2GB RAM and a SATA HDD to an i5-2520M Win7 64-bit with 4GB RAM and a 128 GB SSD (Corsair P3 128).

    My initial experience was what I expected: fast boot, quick loading of all the applications (Eclipse now takes 5 seconds as opposed to 30s on my old notebook), an overall great experience. Then I started to build up my development stack, both as LAMP (using VirtualBox with a Debian guest) and WAMP (Windows-native Apache + MySQL + PHP). I wanted to compare those two.

    This still all worked out great, until I started to pull my projects into these stacks. And here came the nasty surprise: one of those projects produced much worse response times than on my old notebook (this was true for both the VirtualBox and WAMP stacks). Apache, PHP and MySQL configurations were practically identical in all environments. I started to do a lot of benchmarking and profiling, and here is what I've found:

    All general benchmarks (Performance Test 7.0, HDTune Pro, wPrime2 and some more) gave a big advantage to the new notebook. Nothing surprising here.

    Disk-specific tests showed that read/write operations peaked around 380M/160M for the SSD, and all the different-sized block operations also performed very well.

    Apache performance benchmarking with Apache Benchmark for a small static HTML file (10 concurrent threads, 500 iterations):

        Old notebook:                    min 47ms, median 111ms, max 156ms
        New WAMP stack:                  min 71ms, median 135ms, max 296ms
        New LAMP stack (in VirtualBox):  min 6ms,  median 46ms,  max 175ms

    Right here I don't get why the native WAMP stack performed so badly, but at least the LAMP environment brought the expected speed.

    Apache performance measurement for non-cached PHP content. The PHP runs a loop of 1000 and generates sha1(uniqid()) inside. Again, 10 concurrent threads and 500 iterations were used for the benchmark:

        Old notebook:                    min 0ms,   median 39ms,  max 218ms
        New WAMP stack:                  min 20ms,  median 61ms,  max 186ms
        New LAMP stack (in VirtualBox):  min 124ms, median 704ms, max 2463ms

    What the hell? The new LAMP stack performed miserably, and even the new native WAMP was outperformed by the old notebook.

    PHP + MySQL test. The test consists of connecting to a database and reading a single record from a table using an INNER JOIN on 3 more (indexed) tables, repeated 100 times within a loop. Databases were identical. 10 concurrent threads, 100 iterations were used for the benchmark:

        Old notebook:                    min 1201ms, median 1734ms, max 3728ms
        New WAMP stack:                  min 367ms,  median 675ms,  max 1893ms
        New LAMP stack (in VirtualBox):  min 1410ms, median 3659ms, max 5045ms

    And the same test with concurrency set to 1 (instead of 10):

        Old notebook:                    min 1201ms, median 1261ms, max 1357ms
        New WAMP stack:                  min 399ms,  median 483ms,  max 539ms
        New LAMP stack (in VirtualBox):  min 285ms,  median 348ms,  max 444ms

    Strictly for my purposes, as I'm using a self-contained development environment (= low concurrency), I could be satisfied with the second test's results. Though I have no idea why the VirtualBox environment performed so badly with higher concurrency.

    Finally, I performed a test of including many PHP files. The application I mentioned at the beginning, the one that was performing so badly, has a heavy bootstrap: it loads hundreds of small library and configuration files while initializing. So this test does nothing else, just includes about 100 files. Concurrency set to 1, 100 iterations:

        Old notebook:                    min 140ms, median 168ms, max 406ms
        New WAMP stack:                  min 434ms, median 488ms, max 604ms
        New LAMP stack (in VirtualBox):  min 413ms, median 1040ms, max 1921ms

    Even if I consider that VirtualBox reached those files via shared folders, which slows things down a bit, I still don't see how the old notebook could so heavily outperform both new configurations. And I think this is the real root of the slow performance, as the application uses even more includes, and the whole bootstrap occurs several times within a page request (for each AJAX call, for example).

    To sum it up, here I am with a brand new high-performance notebook that loads the same page in 20 seconds that my old notebook can load in 5-7 seconds. Needless to say, I'm not a very happy person right now. Why do you think I experience these poor performance values? What are my options to remedy this situation?

    Read the article

  • Cannot connect to Amazon RDS

    - by Justin
    I have created an Amazon RDS database under the free tier (SQL Server Express, micro instance, etc.), but I cannot connect to the server using Microsoft SQL Server Management Studio. I have configured the security group of the database instance (default) to accept my IP address, and I am following the connection guide from Amazon located here. The error I receive is: "Cannot connect to databaseName.c***rnqg***v.us-east-1.rds.amazonaws.com,1433. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.) (Microsoft SQL Server, Error: 10060)" I am using server type "Database Engine" and SQL Server Authentication. A sketch of a quick network-level check is below.
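    Since error 10060 is a network timeout rather than an authentication failure, a raw TCP probe against port 1433 can show whether the endpoint is reachable at all. A minimal PHP sketch (the hostname is a placeholder for the elided endpoint above):

        <?php
        // Probe TCP port 1433 on the RDS endpoint with a 5-second timeout.
        $host = 'databaseName.XXXXXXXX.us-east-1.rds.amazonaws.com'; // placeholder
        $sock = @fsockopen($host, 1433, $errno, $errstr, 5);
        if ($sock === false) {
            echo "No TCP connection: $errno $errstr\n"; // likely security group / firewall
        } else {
            echo "Port 1433 is reachable\n"; // problem is higher up the stack
            fclose($sock);
        }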

    Read the article

  • AWS RDS (SQL Server): SSL Connection - The target principal name is incorrect

    - by AX1
    I have an Amazon Web Services (AWS) Relational Database Service (RDS) instance running SQL Server 2012 Express. I've installed Amazon's certificate (from aws.amazon.com/rds) in the client machine's Trusted Root Certification Authorities store. However, when I connect to the RDS instance (using SQL Server Management Studio 2012) and check "Encrypt Connection", I get the following error: "A connection was successfully established with the server, but then an error occurred during the login process. (provider: SSL Provider, error: 0 - The target principal name is incorrect.) (Microsoft SQL Server)" What does this mean, and how can I fix it? Thanks!

    Read the article

  • Windows Update when Group Policy Forbids

    - by David Beckman
    I am in the Administrators group on my local Windows XP machine, and I would like to get updates via http://update.microsoft.com/ [1]. However, this is prevented by Group Policy: "Network policy settings prevent you from using this website to get updates for your computer." Is there any way to override this specific policy for my machine or my user? [1] Several installed applications are Microsoft-based but are not part of the machine standard (e.g. Visual Studio), so I am not getting updates for those applications. I could periodically visit the various application sites and look for hotfixes, but that is beyond tedious.

    Read the article

  • SQL Server 2008 - Cannot connect to local default instance

    - by Tone
    I recently rebuilt my machine with Windows 7 x64 and installed SQL Server 2008 Enterprise. I can connect fine to other remote instances via Management Studio (be they 2000, 2005 or 2008), but I cannot find my local default instance.
    - I have verified that a directory was created for the default instance: C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER
    - I can connect to the SQLEXPRESS instance fine
    - I have rerun setup to ensure I have everything installed
    - I have verified that the SQL Server (MSSQLSERVER) service is running
    - I have tried browsing for the instance and see all the others available except my local one
    - I have tried using this for the server name: TSOUTHERLANDPC\MSSQLSERVER, the first part being my local machine name
    My issue is not the same as this post or this post. Any ideas?

    Read the article

  • What does 1080p mean in laptop resolution?

    - by Brian
    Dell is selling a laptop (Studio 15) where the options for screen resolution are 720p, 900p, and 1080p. What do these mean? I've found a Wikipedia entry that lists the old, confusing names (UXGA, WUXGA, etc.) and seems to indicate that 1080p might be 1920x1080, but it has no information on 900p or 720p. Really, it was bad enough with the WUXGA-style descriptions; I think vendors should tell you what they're selling. If you know what these screen resolutions are, I'd appreciate hearing it here.

    Read the article

  • IIS 7.0 Web Deploy authentication fails after changing Windows password... help?

    - by Lucifer Sam
    I have a very basic Windows 2008 R2 Web Server running IIS 7.0. This is just a test/practice server, so I enabled Web Deployment using Windows Authentication. All was well and I was able to deploy easily from VS 2010 using the Administrator account credentials. After changing the Administrator account password, I get the following error when trying to deploy from Visual Studio (using the new password, of course): Error 1 Web deployment task failed... ...An unsupported response was received. The response header 'MSDeploy.Response' was '' but 'v1' was expected. The remote server returned an error: (401) Unauthorized. If I change the Administrator password back to the original one and try to publish using it everything works fine again. So what am I missing? Am I supposed to do something in IIS after changing the password? Thanks!

    Read the article

  • SQL Server 2008 Database Tuning Advisor won't start

    - by Andrew Hancox
    For some reason I can't get the Database Tuning Advisor (DTA) to connect to my development machine. It connects to a remote DB just fine, but when I point it at my dev machine I get an error saying: "Failed to initialize MSDB database for tuning (exit code: -1073741819)." I'm pretty sure it's not a permissions issue, since I've used Profiler to capture the commands it runs, and everything so far looks fine; the commands run under my account, which is in the sysadmin role, and when I run them in SQL Server Management Studio they go through fine. I'm fairly convinced the problem is related to creating the objects in MSDB that DTA uses, but I tried creating those manually (I found scripts on the web) and it just seems to push the problem slightly further along. I'm going out of my mind; I've even tried reinstalling SQL Server, but that hasn't fixed it. I'm using SQL Server 2008 with SP1 (10.0.2531) on Windows Server 2008 (patched up to date). SAVE ME!!!!!

    Read the article

  • Login failed--password of account must be changed--error 18488

    - by Bill Paetzke
    I failed to connect to a production SQL Server. My administrator reset my password and told me what it was, but SQL Server Management Studio gives me this error: "Login failed for user 'Bill'. Reason: The password of the account must be changed. (Microsoft SQL Server, Error: 18488)" So, how can I reset my password? I tried logging into the server over Remote Desktop with this account, but it said the account doesn't exist, so I guess it's not a regular Windows account, just a SQL Server login (if that helps).

    Read the article

  • Unidentified Connection on a Compaq CQ61 (Windows 7) over a DSL cable network connection

    - by Mohammed Thouseef
    I have a Dell Studio on a shared DSL internet connection. The router is a NETGEAR DG834, and the network adapter is described as "Broadcom NetLink (TM) Gigabit Ethernet". This connection works fine on the Dell laptop, which runs Vista Home Premium. My question: when I connect an HP Compaq Presario CQ61 (running Windows 7 Home Premium) to the same connection, it automatically selects "Public network" and displays an unidentified connection. I have tried several times without success. Wireless networking works fine; the problem only occurs with the wired DSL connection. When I diagnose the connection, it generates the error "Local Area Connection does not have a valid IP configuration", and in the LAN connection status window I noticed that IPv4 and IPv6 connectivity both show "No network access". Can you clarify what the problem is?

    Read the article

  • Editing TrueType fonts

    - by Parsa
    In typical Persian fonts, which are TrueType, there is a historical problem with Yeh and Kaf. These fonts were created for Windows 98, which didn't include full Persian support, so now we have two kinds of Kaf: Keheh (0x6a9, ک) and Arabic Kaf (0x643, ك), and two kinds of Yeh: Farsi Yeh (0x6cc, ی) and Arabic Yeh (0x64a, ي). Old fonts use the Arabic ones, but the standard Persian keyboard of course produces the Persian ones. Is it possible to edit and fix these fonts? I've made many attempts to remap these characters with FontLab Studio, all of which failed. Any suggestions?

    Read the article

< Previous Page | 400 401 402 403 404 405 406 407 408 409 410 411  | Next Page >