Search Results

Search found 22308 results on 893 pages for 'floating point'.


  • New to iPhone SDK: touchesBegan not called

    - by coriolan
    Hi, I created a very, very basic iPhone app with File/New Project/View-Based Application. No NIB file there. Here is my appDelegate:

        // MyAppDelegate.h
        @interface MyAppDelegate : NSObject <UIApplicationDelegate> {
            UIWindow *window;
            MyViewController *viewController;
        }

        // MyAppDelegate.m
        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            // Override point for customization after app launch
            [window addSubview:viewController.view];
            [window makeKeyAndVisible];
        }

    And here is my loadView method in my controller:

        - (void)loadView {
            CGRect mainFrame = [[UIScreen mainScreen] applicationFrame];
            UIView *contentView = [[UIView alloc] initWithFrame:mainFrame];
            contentView.backgroundColor = [UIColor redColor];
            self.view = contentView;
            [contentView release];
        }

    Now, in order to catch the touchesBegan event, I created a new subclass of UIView:

        // TouchView.h
        @interface TouchView : UIView {
        }

        // TouchView.m
        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            NSLog(@"Touch detected");
        }

    and modified the second line in my loadView into this:

        TouchView *contentView = [[UIView alloc] initWithFrame:mainFrame];

    Why is touchesBegan never called?

    Read the article

  • Using Image Source with big images in WPF

    - by xyzzer
    I am working on an application that allows users to manipulate multiple images by using ItemsControl. I started running some tests and found that the app has problems displaying some big images - ie. it did not work with the high resolution (21600x10800), 20MB images from http://earthobservatory.nasa.gov/Features/BlueMarble/BlueMarble_monthlies.php, though it displays the 6200x6200, 60MB Hubble telescope image from http://zebu.uoregon.edu/hudf/hudf.jpg just fine. The original solution just specified an Image control with a Source property pointing at a file on a disk (through a binding). With the Blue Marble file - the image would just not show up. Now this could be just a bug hidden somewhere deep in the funky MVVM + XAML implementation - the visual tree displayed by Snoop goes like: Window/Border/AdornerDecorator/ContentPresenter/Grid/Canvas/UserControl/Border/ContentPresenter/Grid/Grid/Grid/Grid/Border/Grid/ContentPresenter/UserControl/UserControl/Border/ContentPresenter/Grid/Grid/Grid/Grid/Viewbox/ContainerVisual/UserControl/Border/ContentPresenter/Grid/Grid/ItemsControl/Border/ItemsPresenter/Canvas/ContentPresenter/Grid/Grid/ContentPresenter/Image... Now debug this! WPF can be crazy like that... Anyway, it turned out that if I create a simple WPF application - the images load just fine. I tried finding out the root cause, but I don't want to spend weeks on it. I figured the right thing to do might be to use a converter to scale the images down - this is what I have done: ImagePath = @"F:\Astronomical\world.200402.3x21600x10800.jpg"; TargetWidth = 2800; TargetHeight = 1866; and <Image> <Image.Source> <MultiBinding Converter="{StaticResource imageResizingConverter}"> <MultiBinding.Bindings> <Binding Path="ImagePath"/> <Binding RelativeSource="{RelativeSource Self}" /> <Binding Path="TargetWidth"/> <Binding Path="TargetHeight"/> </MultiBinding.Bindings> </MultiBinding> </Image.Source> </Image> and public class ImageResizingConverter : MarkupExtension, IMultiValueConverter { public Image TargetImage { get; set; } public string SourcePath { get; set; } public int DecodeWidth { get; set; } public int DecodeHeight { get; set; } public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture) { this.SourcePath = values[0].ToString(); this.TargetImage = (Image)values[1]; this.DecodeWidth = (int)values[2]; this.DecodeHeight = (int)values[3]; return DecodeImage(); } private BitmapImage DecodeImage() { BitmapImage bi = new BitmapImage(); bi.BeginInit(); bi.DecodePixelWidth = (int)DecodeWidth; bi.DecodePixelHeight = (int)DecodeHeight; bi.UriSource = new Uri(SourcePath); bi.EndInit(); return bi; } public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture) { throw new Exception("The method or operation is not implemented."); } public override object ProvideValue(IServiceProvider serviceProvider) { return this; } } Now this works fine, except for one "little" problem. When you just specify a file path in Image.Source - the application actually uses less memory and works faster than if you use BitmapImage.DecodePixelWidth. Plus with Image.Source if you have multiple Image controls that point to the same image - they only use as much memory as if only one image was loaded. With the BitmapImage.DecodePixelWidth solution - each additional Image control uses more memory and each of them uses more than when just specifying Image.Source. 
Perhaps WPF somehow caches these images in compressed form while if you specify the decoded dimensions - it feels like you get an uncompressed image in memory, plus it takes 6 times the time (perhaps without it the scaling is done on the GPU?), plus it feels like the original high resolution image also gets loaded and takes up space. If I just scale the image down, save it to a temporary file and then use Image.Source to point at the file - it will probably work, but it will be pretty slow and it will require handling cleanup of the temporary file. If I could detect an image that does not get loaded properly - maybe I could only scale it down if I need to, but Image.ImageFailed never gets triggered. Maybe it has something to do with the video memory and this app just using more of it with the deep visual tree, opacity masks etc. Actual question: How can I load big images as quickly as Image.Source option does it, without using more memory for additional copies and additional memory for the scaled down image if I only need them at a certain resolution lower than original? Also, I don't want to keep them in memory if no Image control is using them anymore.

    Read the article

  • Unit testing a database connection and general questions on database-dependent code and unit testing

    - by dotnetdev
    Hi, If I have a method which establishes a database connection, how could this method be tested? Returning a bool in the event of a successful connection is one way, but is that the best way? From a testability standpoint, is it best to have the connection method as one method and the method to get data back as a separate method? Also, how would I test methods which get back data from a database? I may do an assert against expected data, but the actual data can change and still be the right resultset. EDIT: For the last point, to check data: if it's supposed to be a list of cars, then I can check they are real car models. Or if they are a bunch of web servers, I can have a list of existing web servers on the system, return that from the code under test, and get the test result. If the results are different, the data is the issue rather than the query? Thanks
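    For what it's worth, the usual way this gets tackled (a minimal sketch; ICarRepository, FakeCarRepository and CarReport are hypothetical names, and NUnit is just one choice of framework) is to keep the real connection behind an interface, unit test everything above it against a fake, and leave a small number of integration tests to prove the real connection actually works:

        using System.Collections.Generic;
        using NUnit.Framework;   // any test framework works; NUnit is only used for illustration

        public interface ICarRepository
        {
            IList<string> GetCarModels();               // the "get data back" method, abstracted away
        }

        // Fake used by unit tests - no database connection involved.
        public class FakeCarRepository : ICarRepository
        {
            public IList<string> GetCarModels()
            {
                return new List<string> { "Astra", "Corsa" };
            }
        }

        // The class whose logic we actually want to unit test.
        public class CarReport
        {
            private readonly ICarRepository repository;
            public CarReport(ICarRepository repository) { this.repository = repository; }
            public int CountModels() { return repository.GetCarModels().Count; }
        }

        [TestFixture]
        public class CarReportTests
        {
            [Test]
            public void CountModels_ReturnsWhateverTheRepositoryProvides()
            {
                var report = new CarReport(new FakeCarRepository());
                Assert.AreEqual(2, report.CountModels());   // assert on behaviour, not on live data
            }
        }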

    Read the article

  • SharePoint 2010 FBA with custom form - 403 error

    - by Chris R Chapman
    I have a SharePoint 2010 site that is configured for Forms Based Auth using custom role, membership and profile providers. This works perfectly using the OOTB SharePoint 2010 FBA form (ie. under /_forms in the web app virtual directory). My problem is with a custom login form that is located in a separate folder, /Landing/Login/default.aspx. I've configured the web app to point to this form (contains an unmodified ASP.NET login control), which is rendered when the user hits the root URL. The problem comes when they submit their credentials and the form posts back for the redirection to /_layouts/Authenticate.aspx. It stops cold with a 403. If I revert back to the OOTB FBA form (using the same providers) everything works fine. Any suggestions on what could be going wrong?

    Read the article

  • moq SetupSet is obsolete. In place of what?

    - by jkohlhepp
    Let's say I want to use moq to create a callback on a setter to store the set property in my own field for later use. (Contrived example - but it gets to the point of the question.) I could do something like this: myMock.SetupSet(x => x.MyProperty).Callback(value => myLocalVariable = value); And that works just fine. However, SetupSet is obsolete according to Intellisense. But it doesn't say what should be used as an alternative. I know that moq provides SetupProperty which will autowire the property with a backing field. But that is not what I'm looking for. I want to capture the set value into my own variable. How should I do this using non-obsolete methods?
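    The replacement (a sketch, assuming Moq 4.x and that MyProperty is a string - swap the type parameter to match your property) is the SetupSet overload that takes the setter as an assignment expression:

        // Setter setup expressed as an assignment; the callback receives the assigned value.
        myMock.SetupSet(x => x.MyProperty = It.IsAny<string>())
              .Callback<string>(value => myLocalVariable = value);

        // Later, when production code runs myMock.Object.MyProperty = "abc",
        // myLocalVariable ends up holding "abc".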

    Read the article

  • C# How to check if an FTP Directory Exists

    - by Billy Logan
    Hello everyone, I'm looking for the best way to check for a given directory via FTP. Currently I have the following code:

        private bool FtpDirectoryExists(string directory, string username, string password)
        {
            try
            {
                var request = (FtpWebRequest)WebRequest.Create(directory);
                request.Credentials = new NetworkCredential(username, password);
                request.Method = WebRequestMethods.Ftp.GetDateTimestamp;
                FtpWebResponse response = (FtpWebResponse)request.GetResponse();
            }
            catch (WebException ex)
            {
                FtpWebResponse response = (FtpWebResponse)ex.Response;
                if (response.StatusCode == FtpStatusCode.ActionNotTakenFileUnavailable)
                    return false;
                else
                    return true;
            }
            return true;
        }

    This returns false whether the directory is there or not. Can someone point me in the right direction? Thanks in advance, Billy
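    For reference, one commonly suggested alternative (a sketch - behaviour can still vary between FTP servers) is to request a directory listing of the path itself, with a trailing slash, and treat a WebException as "not there":

        private bool FtpDirectoryExists(string directory, string username, string password)
        {
            // Ensure the URI points at the directory itself, e.g. "ftp://host/folder/".
            if (!directory.EndsWith("/"))
                directory += "/";

            var request = (FtpWebRequest)WebRequest.Create(directory);
            request.Credentials = new NetworkCredential(username, password);
            request.Method = WebRequestMethods.Ftp.ListDirectory;

            try
            {
                using (request.GetResponse())
                {
                    return true;    // listing succeeded, so the directory exists
                }
            }
            catch (WebException)
            {
                return false;       // typically a 550 "file unavailable" when it does not exist
            }
        }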

    Read the article

  • Please explain some of Paul Graham's points on LISP

    - by kunjaan
    I need some help understanding some of the points from Paul Graham's article http://www.paulgraham.com/diff.html:

    - A new concept of variables. In Lisp, all variables are effectively pointers. Values are what have types, not variables, and assigning or binding variables means copying pointers, not what they point to.
    - A symbol type. Symbols differ from strings in that you can test equality by comparing a pointer.
    - A notation for code using trees of symbols.
    - The whole language always available. There is no real distinction between read-time, compile-time, and runtime. You can compile or run code while reading, read or run code while compiling, and read or compile code at runtime.

    What do these points mean? How are they different in languages like C or Java? Do any languages other than the LISP family have any of these constructs now?

    Read the article

  • What is the meaning of NSXMLParserErrorDomain error 5.?

    - by mobibob
    Ok, I am back on this task. I have my XML properly download from my webserver with a URL pointing to the server's file, however, when I detect the network is 'unreachable' I simply point the URL to my application's local XML and I get the following error (N.B. the file is a direct copy of the one on the server). I cannot find detail description, but I think it is saying that the URL is pointing to an inaccessible location. Am I storing this resource in the wrong location? I think I want it in the HomeDirectory / Library?? Debug output loadMyXml: /var/mobile/Applications/950569B0-6113-48FC-A184-4F1B67A0510F/MyApp.app/SampleHtml.xml 2009-10-14 22:08:17.257 MyApp[288:207] Wah! It didn't work. Error Domain=NSXMLParserErrorDomain Code=5 "Operation could not be completed. (NSXMLParserErrorDomain error 5.)" 2009-10-14 22:08:17.270 MyApp[288:207] Operation could not be completed. (NSXMLParserErrorDomain error 5.)

    Read the article

  • How can I set initial values when using Silverlight DataForm and .Net RIA Services DomainDataSource?

    - by TheDuke
    I'm experimenting with .NET RIA and Silverlight. I have a few related entities: Client, Project and Job; a Client has many Projects, and a Project has many Jobs. In the Silverlight app, I'm using a DomainDataSource and DataForm controls to perform the CRUD operations. When a Client is selected, a list of projects appears, at which point the user can add a new project for that client. I'd like to be able to fill in the value for client automatically, but there doesn't seem to be any way to do that. While there is an AddingNewItem event on the DataForm control, it seems to fire before the DataForm has an instance of the new object, and I'm not sure trawling through the ChangeSet from the DomainDataSource SubmittingChanges event is the best way to do this. I would have thought this would have been an obvious feature... anyone know the best way to achieve this functionality?

    Read the article

  • Pass Dictionary of routeValues to ActionLink

    - by Graham
    All, I'm getting to grips with ASP.NET MVC. So far, so good, but this one is a little nuts. I have a view model that contains a dictionary of attributes for a hyperlink, used like this (menu is the model variable): Html.ActionLink(Html.Encode(menu.Name), Html.Encode(menu.Action), Html.Encode(menu.Controller), menu.Attributes, null) The problem is that the position of "menu.Attributes" expects an object in the form new { Name = "Fred", Age = 24 }. From what I can tell, this anonymous object is actually converted to a dictionary via reflection anyway, BUT you can't pass a dictionary to it in the first place! The HTML generated for the link simply shows the dictionary type. How on earth do I get round this? The whole point is that it's general and the controller can have set menu.Attributes previously...
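    One way round it (a sketch, assuming menu.Attributes is, or can be converted to, an IDictionary<string, object>): use the ActionLink overload that takes the dictionary types directly, so nothing gets reflected over:

        // Needs System.Web.Routing for RouteValueDictionary and System.Collections.Generic for Dictionary.
        Html.ActionLink(
            menu.Name,
            menu.Action,
            menu.Controller,
            new RouteValueDictionary(menu.Attributes),   // used as-is; no reflection over an anonymous object
            new Dictionary<string, object>())            // this overload wants the htmlAttributes as a dictionary too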

    Read the article

  • Overriding Equals method in Structs

    - by devoured elysium
    I've looked for overriding guidelines for structs, but all I can find is for classes. At first I thought I wouldn't have to check to see if the passed object was null, as structs are value types and can't be null. But now that I come to think of it, as equals signature is public bool Equals(object obj) it seems there is nothing preventing the user of my struct to be trying to compare it with an arbitrary reference type. My second point concerns the casting I (think I) have to make before I compare my private fields in my struct. How am I supposed to cast the object to my struct's type? C#'s as keyword seems only suitable for reference types. Thanks
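    For reference, the usual shape (a sketch with a hypothetical Money struct): test the argument with is, then cast - as won't work here because a struct can't be null - and keep GetHashCode consistent with Equals:

        public struct Money
        {
            private readonly decimal amount;
            private readonly string currency;

            public Money(decimal amount, string currency)
            {
                this.amount = amount;
                this.currency = currency;
            }

            public override bool Equals(object obj)
            {
                // A null or non-Money argument can never equal a Money value.
                if (!(obj is Money))
                    return false;

                Money other = (Money)obj;   // safe: the 'is' check guarantees the cast succeeds
                return amount == other.amount && currency == other.currency;
            }

            public override int GetHashCode()
            {
                // Keep GetHashCode consistent with Equals.
                return amount.GetHashCode() ^ (currency == null ? 0 : currency.GetHashCode());
            }
        }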

    Read the article

  • OAuth in C# as a client

    - by Redth
    Hi, I've been given 6 bits of information to access some data from a website: Website Json Url (eg: http://somesite.com/items/list.json) OAuth Authorization Url (eg: http://somesite.com/oauth/authorization) OAuth Request Url (eg: http://somesite.com/oauth/request) OAuth Access Url (eg: http://somesite.com/oauth/access) Client Key (eg: 12345678) Client Secret (eg: abcdefghijklmnop) Now, I've looked at DotNetOpenAuth and OAuth.NET libraries, and while I'm sure they are very capable of doing what I need, I just can't figure out how to use either in this way. Could someone post some sample code of how to consume the Url (Point 1.) in either library (or any other way that may work just as well)? Thanks!

    Read the article

  • Winforms role based security limitations

    - by muhan
    I'm implementing role based security using Microsoft's membership and role provider. The theoretical problem I'm having is that you implement a specific role on a method such as: [PrincipalPermissionAttribute(SecurityAction.Demand, Role="Supervisor")] private void someMethod() {} What if at some point down the road, I don't want Supervisors to access someMethod() anymore? Wouldn't I have to change the source code to make that change? Am I missing something? It seems there has to be some way to abstract the relationship between the supervisors role and the method so I can create a way in the application to change this coupling of role permission to method. Any insight or direction would be appreciated. Thank you.
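    One way to take the role name out of the compiled code (a sketch; the "SomeMethodRole" appSettings key is made up for illustration) is to replace the declarative attribute with an imperative demand, reading the role name from configuration:

        // Needs System.Configuration and System.Security.Permissions.
        private void someMethod()
        {
            // Role name comes from config (or a database) instead of being baked into an attribute.
            string requiredRole = ConfigurationManager.AppSettings["SomeMethodRole"];   // e.g. "Supervisor"

            // Throws SecurityException if the current principal is not in the role.
            new PrincipalPermission(null, requiredRole).Demand();

            // ... method body ...
        }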

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I’m going to postpone that until my next post, as I’ve had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables: alert.Alert alert.Alert_Cleared alert.Alert_Comment alert.Alert_Severity alert.Alert_Type In this post, I’m only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let’s have a look at it. SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType FROM alert.Alert ORDER BY AlertId DESC;  AlertIdAlertTypeTargetObjectReadSubType 165550397:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,10 265549387:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,10 365548187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 11…     So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to. Okay, now lets look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings use nicely to the next table, data.Alert_Type. Let’s have a look at what’s in this table: SELECT AlertType, Event, Monitoring, Name, Description FROM alert.Alert_Type ORDER BY AlertType;  AlertTypeEventMonitoringNameDescription 1100Processor utilizationProcessor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration 2210SQL Server error log entryAn error is written to the SQL Server error log with a severity level above a specified value. 3310Cluster failoverThe active cluster node fails, causing the SQL Server instance to switch nodes. 4410DeadlockSQL deadlock occurs. 
5500Processor under-utilizationProcessor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration 6610Job failedA job does not complete successfully (the job returns an error code). 7700Machine unreachableHost machine (Windows server) cannot be contacted on the network. 8800SQL Server instance unreachableThe SQL Server instance is not running or cannot be contacted on the network. 9900Disk spaceDisk space used on a logical disk drive is above a defined threshold for longer than a specified duration. 101000Physical memoryPhysical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration. 111100Blocked processSQL process is blocked for longer than a specified duration. 121200Long-running queryA SQL query runs for longer than a specified duration. 131400Backup overdueNo full backup exists, or the last full backup is older than a specified time. 141500Log backup overdueNo log backup exists, or the last log backup is older than a specified time. 151600Database unavailableDatabase changes from Online to any other state. 161700Page verificationTorn Page Detection or Page Checksum is not enabled for a database. 171800Integrity check overdueNo entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time. 181900Fragmented indexesFragmentation level of one or more indexes is above a threshold percentage. 192400Job duration unusualThe duration of a SQL job duration deviates from its baseline duration by more than a threshold percentage. 202501Clock skewSystem clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds. 212700SQL Server Agent Service statusThe SQL Server Agent Service status matches the status specified. 222800SQL Server Reporting Service statusThe SQL Server Reporting Service status matches the status specified. 232900SQL Server Full Text Search Service statusThe SQL Server Full Text Search Service status matches the status specified. 243000SQL Server Analysis Service statusThe SQL Server Analysis Service status matches the status specified. 253100SQL Server Integration Service statusThe SQL Server Integration Service status matches the status specified. 263300SQL Server Browser Service statusThe SQL Server Browser Service status matches the status specified. 273400SQL Server VSS Writer Service statusThe SQL Server VSS Writer status matches the status specified. 283501Deadlock trace flag disabledThe monitored SQL Server’s trace flag cannot be enabled. 293600Monitoring stopped (host machine credentials)SQL Monitor cannot contact the host machine because authentication failed. 303700Monitoring stopped (SQL Server credentials)SQL Monitor cannot contact the SQL Server instance because authentication failed. 313800Monitoring error (host machine data collection)SQL Monitor cannot collect data from the host machine. 323900Monitoring error (SQL Server data collection)SQL Monitor cannot collect data from the SQL Server instance. 334000Custom metricThe custom metric value has passed an alert threshold. 344100Custom metric collection errorSQL Monitor cannot collect custom metric data from the target object. 
Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self evident, and I’m going to skip over the Event and Monitoring columns as they’re not very interesting. The AlertId column is the primary key, and is referenced by AlertId in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts: SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType ORDER BY AlertId DESC;  AlertIdNameTargetObjectReadSubType 165550Monitoring error (SQL Server data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,00 265549Monitoring error (host machine data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,00 365548Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one’s a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it’s probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it’s best to consider an example from the table above. Have a look at the alert with an AlertID of 65543. It’s a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus: Cluster: Name = "granger" SqlServer: Name = "" (an empty string, denoting the default instance) Database: Name = "SqlMonitorData" Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,". 
It is indeed composed of three parts, one for each level in the hierarchy:

- Cluster: "7:Cluster,1,4:Name,s7:granger,"
- SqlServer: "9:SqlServer,1,4:Name,s0:,"
- Database: "8:Database,1,4:Name,s14:SqlMonitorData,"

Each part is handled in exactly the same way, so let’s concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following:

- "7:Cluster," – This identifies the level in the hierarchy.
- "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name).
- "4:Name,s7:granger," – This represents the Name property, and its corresponding value, granger. It’s split up like this:
  - "4:Name," – Indicates the name of the key property.
  - "s" – Indicates the type of the key property, in this case, it’s a string.
  - "7:granger," – Indicates the value of the property.

At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well, an encoding scheme is used, which consists of the following:

- "7" – This is the length of the string "Cluster".
- ":" – This is a delimiter between the length of the string and the actual string’s contents.
- "Cluster" – This is the string itself. 7 characters.
- "," – This is a final terminating character that indicates the end of the encoded string.

You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows:

- "I" – Denotes a bigint value. For example, "I65432,".
- "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,".
- "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I’ll describe how datetime values are handled in the SQL Monitor data repository in a future post.

I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I’m interested in:

    SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType
    FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType
    WHERE AlertId = 65769;

    AlertId  AlertType  Name           TargetObject                                                                                                              Read  SubType
    65769    40         Custom metric  7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,    0     2
I don’t really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert. SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id WHERE AlertId = 65769;  AlertIdAlertTypeNameCustomAlertNameTargetObjectReadSubType 16576940Custom metricCPU pressure (avg runnable task count)7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 The TargetObject value in this case breaks down like this: "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger". "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance). "8:Database,1,4:Name,s6:master," – Database named "master". "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2. Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It’s root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I’d like to point out that whilst the SubType references a custom alert definition, the MetricID value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case. Okay, that’s enough for now, not least because as I’m typing this, it’s almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I’ll either cover the remaining three tables in the alert schema, or I’ll delve into the way SQL Monitor stores its monitoring data, as I’d originally planned to cover in this post.

    Read the article

  • Initializing links in jqtouch in AJAX loaded content

    - by Cody Caughlan
    I have an iphone web app built in jqtouch, when content is loaded by links ("a" tags) then any content on the next page is parsed by jqtouch and any links are initialized. However, when I load content via an AJAX call and append() it to an element then any links in that content are NOT initialized by jqtouch. Thus any clicks on those links are full-blown clicks to a new resource and are not handled by jqtouch, so at that point you've effectively broken out of jqtouch. My AJAX code is: #data <script type="text/javascript"> $.ajax({ url: '/mobile/nearby-accounts', type: 'GET', dataType: 'html', data: {lat: lat, lng: lng}, success: function(html) { $('#data').empty().append(html); // Is there some method I call on jqtouch to have any links in $('#data') be hooked up to the jqtouch lifecycle? } </script> Thanks in advance.

    Read the article

  • Mavericks system Ruby and gem broken

    - by T1000
    When I tried to run ruby -v or gem -v (or any other command), I get:

        dyld: lazy symbol binding failed: Symbol not found: _ruby_run
          Referenced from: /usr/local/bin/ruby
          Expected in: /usr/lib/libruby.dylib
        dyld: Symbol not found: _ruby_run
          Referenced from: /usr/local/bin/ruby
          Expected in: /usr/lib/libruby.dylib

    This is after I ran rvm system to temporarily switch to the system default Ruby. RVM is working fine, but I have a special need to install a gem to the system Ruby and I can't because of this problem. Does anyone know why? It seems to be some kind of linking problem with Ruby, but I don't know how to solve it. I ran which ruby and it's at this point located in "/usr/local/bin/ruby". I checked the Ruby in "/usr/lib/" and it's pointing to my system Ruby: "../../System/Library/Frameworks/Ruby.framework/Versions/Current/usr/lib/ruby". Any help would be appreciated.

    Read the article

  • When querying the Win32_NTLogEvent class from WMI with WQL, is the TimeGenerated property based on local time or GMT?

    - by jpmcclung
    I am writing a C# Windows service that is doing some churning through the event log on a few domain controllers. Some of them are Windows Server 2003 and some are Windows Server 2008. Upon the service stopping, I am attempting to resume where I left off in the logs. In order to do this, instead of

        SELECT * FROM Win32_NTLogEvent WHERE --criteria for events I am looking for

    I am doing

        SELECT * FROM Win32_NTLogEvent WHERE TimeGenerated = --some date AND --criteria for events I am looking for

    At one point I was convinced that the TimeGenerated field was in the local time of the server, but now it seems that the Windows 2008 servers are using GMT to record that time. Can anyone shed some light on whether this is a real difference between the way the two operating systems function, or is this a configuration problem?
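    Whichever way each server records it, the DMTF datetime string itself carries a UTC-offset field, so one way to make the resume logic robust (a sketch, assuming the System.Management types; the SERVER01 scope and Security log filter are placeholders) is to convert on the client and compare everything in UTC:

        using System;
        using System.Management;

        // Normalise TimeGenerated on the client instead of trusting the server's notion of "local".
        var searcher = new ManagementObjectSearcher(
            @"\\SERVER01\root\cimv2",
            "SELECT TimeGenerated, EventCode FROM Win32_NTLogEvent WHERE Logfile = 'Security'");

        foreach (ManagementObject evt in searcher.Get())
        {
            string dmtf = (string)evt["TimeGenerated"];                  // e.g. "20100407110101.000000-420"
            DateTime generatedLocal = ManagementDateTimeConverter.ToDateTime(dmtf);   // honours the embedded offset
            DateTime generatedUtc   = generatedLocal.ToUniversalTime();  // compare/store everything in UTC

            // ... resume-from-here logic keyed off generatedUtc ...
        }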

    Read the article

  • Improving ANTLR DSL parse-error messages

    - by Dan Fabulich
    I'm working on a domain-specific language (DSL) for non-programmers. Non-programmers make a lot of grammar mistakes: they misspell keywords, they don't close parentheses, they don't terminate blocks, etc. I'm using ANTLR to generate my parser; it provides a nifty mechanism for handling RecognitionExceptions to improve error handling. But I'm finding it pretty hard to develop good error-handling code for my DSL. At this point, I'm considering ways to simplify the language to make it easier for me to provide users with high-quality error messages, but I'm not really sure how to go about this. I think I want to reduce the ambiguity of errors somehow, but I'm not sure how to implement that idea in a grammar. In what ways can I simplify my language to improve parse-error messages for my users? EDIT: Updated to clarify that I'm interested in ways to simplify my language, not just ANTLR error-handling tips in general. (Though, thanks for those!)

    Read the article

  • RIA Services versus WCF services: what is a difference

    - by Budda
    There is a lot of information on how to build a Silverlight application using .NET RIA Services, but it isn't clear what is unique in RIA that is absent in WCF. Here are a few topics that talk around this question: http://stackoverflow.com/questions/1647225/ria-services-versus-wcf-services http://stackoverflow.com/questions/945123/net-ria-services-wcf-services But they don't give an answer to the question. Sorry for the stupid question, but what does the "RIA Services" layer bring into your app if you already have "Silverlight <-- WCF Service <-- Business Logic <-- Entity Framework Model <-- Database"? Authentication? Validation? Is it really an asset for you? At the moment the only thing I see is that with RIA Services you don't need to host a WCF service manually and don't need to configure any references on the client side (client side == Silverlight application). Probably I don't know some very useful features of RIA Services? So could you please point me to a good doc for that? Many thanks.

    Read the article

  • SmartGWT Canvas width problem

    - by Doug
    I am having problems with showing my entire SmartGWT Canvas on my initial page. I've stripped everything out and am left with an extremely basic EntryPoint to illustrate my issue. When I just create a Canvas and add it to the root panel, I get a horizontal scrollbar in my browser. Can anyone explain why and what I can do to have the Canvas sized to the width of the window? Thanks in advance. public class TestModule implements EntryPoint { protected Canvas view = null; /** * This is the entry point method. */ public void onModuleLoad() { view = new Canvas(); view.setWidth100(); view.setHeight100(); view.setShowEdges( true ); RootPanel.get().add( view ); } }

    Read the article

  • INetCfgComponent::RaisePropertyUi arguments

    - by Soo Wei Tan
    I'm trying to do some COM interop and attempting to invoke the INetCfgComponent::RaisePropertyUi method. I've gotten to the point where I can enumerate devices and get a valid INetCfgComponent for the adapter that I want to display the UI for. However, I'm a COM newbie (let alone COM interop) so I have no idea what the third argument in RaisePropertyUi() is meant to be. I've tried passing in the INetCfgComponent object that I have, but that just results in a InvalidCastException. MSDN has the following to say about the argument: Pointer to the IUnknown interface. RaisePropertyUi retrieves from IUnknown the interface of the context in which to display a network component's property sheet. RaisePropertyUi uses this interface to restrict the display of the property sheet to the context of a connection.

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again, in Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why Performance Testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies, in Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests as well as why it’s important to follow a goal based pattern while performance testing your application. In part 3 I’ll be discussing Test Result Analysis, Test Result Drill through, Test Report Generation, Test Run Comparison, Asp.net Profiler and some closing thoughts. Test Results – I see some creepy worms! In Part 2 we put together a web performance test and a load test, lets run the test to see load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways: Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.         After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view, this provides key results in a format that is compact and easy to read. You can also print the load test summary, this is generated after the test has completed or been stopped.         Analyse the load test results of a previously run load test – We’ll see this in the section where i discuss comparison between two test runs. The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, drill down to the user activity chart where you can hover over to see more details of the test run.   Generate Report => Test Run Comparisons The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create excel reports and conduct side by side analysis of two test results or to track trend analysis. The tools also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.   Compare results This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation, I would highly recommend exploring the wealth of knowledge available on MSDN. 
Leaving Thoughts While load testing the application with an excessive load for a longer duration of time, i managed to bring the IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that the IIS had run out of threads as all the threads were busy processing existing request, one easy way of fixing this is by increasing the default number of allocated threads, but this might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem. When ever the garbage collection runs it stops processing any pages so all requests that come in during that period are queued up, but realistically the garbage collection completes in fraction of a a second. To understand this better lets look at the .net heap, it is divided into large heap and small heap, anything greater than 85kB in size will be allocated to the Large object heap, the Large object heap is non compacting and remember large objects are expensive to move around, so if you are allocating something in the large object heap, make sure that you really need it! The small object heap on the other hand is divided into generations, so all objects that are supposed to be short-lived are suppose to live in Gen-0 and the long living objects eventually move to Gen-2 as garbage collection goes through.  As you can see in the picture below all < 85 KB size objects are first assigned to Gen-0, when Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started, the process checks for all the dead objects and assigns them as the valid candidate for deletion to free up memory and promotes all the remaining objects in Gen-0 to Gen-1. So in the future when ever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen – 0 again, all of Gen – 1 dead objects are drenched and rest are moved to Gen-2 and Gen-0 objects are moved to Gen-1 to free up Gen-0, but by this time your Garbage collection process has started to take much more time than it usually takes. Now as I mentioned earlier when garbage collection is being run all page requests that come in during that period are queued up. Does this explain why possibly page requests are getting queued up, apart from this it could also be the case that you are waiting for a long running database process to complete.      Lets explore the heap a bit more… What is really a case of crisis is when the objects are living long enough to make it to Gen-2 and then dying, this is definitely a high cost operation. But sometimes you need objects in memory, for example when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server then write a simple application that chucks in a lot of data in cache, run a load test over it for about 10-15 minutes, forcing a lot of data in memory causing the heap to run out of memory. If you get to such a state where you start running out of memory the IIS as a mode of recovery restarts the worker process. It is great way to free up all your memory in the heap but this would clear the cache. The problem with this is if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user basket will now be empty forcing them either to get frustrated and go to a competitor website or if the customer is really patient, give it another try! 
How can you address this, well two ways of addressing this; 1. Workaround – A x86 bit processor only allows a maximum of 4GB of RAM, this means the machine effectively has around 3.4 GB of RAM available, the OS needs about 1.5 GB of RAM to run efficiently, the IIS and .net framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because Team builds by default build your application in ‘Compile as any mode’ it means the application is build such that it will run in x86 bit mode if run on a x86 bit processor and run in a x64 bit mode if run on a x64 but processor. The problem with this is not all applications are really x64 bit compatible specially if you are using com objects or external libraries. So, as a quick win if you compiled your application in x86 bit mode by changing the compile as any selection to compile as x86 in the team build, you will be able to run your application on a x64 bit machine in x86 bit mode (WOW – By running Windows on Windows) and what that means is, you could use 8GB+ worth of RAM, if you take away everything else your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either build a software for NASA or there is something fundamentally wrong in your application. 2. Solution – Now that you have put a workaround in place the IIS will not restart the worker process that regularly, which means you can take a breather and start working to get to the root cause of this memory leak. But this begs a question “How do I Identify possible memory leaks in my application?” Well i won’t say that there is one single tool that can tell you where the memory leak is, but trust me, ‘Performance Profiling’ is a great start point, it definitely gets you started in the right direction, let’s have a look at how. Performance Wizard - Start the Performance Wizard and select Instrumentation, this lets you measure function call counts and timings. Before running the performance session right click the performance session settings and chose properties from the context menu to bring up the Performance session properties page and as shown in the screen shot below, check the check boxes in the group ‘.NET memory profiling collection’ namely ‘Collect .NET object allocation information’ and ‘Also collect the .NET Object lifetime information’.    Now if you fire off the profiling session on your pages you will notice that the results allows you to view ‘Object Lifetime’ which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, Large heap, etc. Another great feature about the profile is that if your application has > 5% cases where objects die right after making to the Gen-2 storage a threshold alert is generated to alert you. Since you have the option to also view the most expensive methods and by capturing the IntelliTrace data you can drill in to narrow down to the line of code that is the root cause of the problem. Well now that we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best of breed tools in the product. Caching One of the main ways to improve performance is Caching. Which basically means you tell the web server that instead of going to the database for each request you keep the data in the webserver and when the user asks for it you serve it from the webserver itself. BUT that can have consequences! 
Let’s look at some code, trust me caching code is not very intuitive, I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine, first time i get the data from the database and second time data is served from the cache, significant performance improvement, EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding the lock as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user locks and starts to get the cache no more concurrency issues. But lets say you are processing 10 requests per second, by the time i have locked the operation to get the results from the database, 9 other users came in and found that the cache key is null so after i have come out and populated the cache they will still go in to get the results again. The application will still be faster because the next set of 10 users and so on would continue to get data from the cache. BUT if we added another null check after locking to build the cache and before actual call to the db then the 9 users who follow me would not make the extra trip to the database at all and that would really increase the performance, but didn’t i say that the code won’t be very intuitive, may be you should leave a comment you don’t want another developer to come in and think what a fresher why is he checking for the cache key null twice !!! The downside of caching is, you are storing the data outside of the database and the data could be wrong because the updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well if you only had one way of updating the data lets say only one entry point to the data update you can write some logic to say that every time new data is entered set the cache object to null. But this approach will not work as soon as you have several ways of feeding data to the system or your system is scaled out across a farm of web servers. The perfect solution to this is Micro Caching which means you cache the query for a set time duration and invalidate the cache after that set duration. The advantage is every time the user queries for that data with in the time span for which you have cached the results there are no calls made to the database and the data is served right from the server which makes the response immensely quick. Now figuring out the appropriate time span for which you micro cache the query results really depends on the application. Lets say your website gets 10 requests per second, if you retain the cache results for even 1 minute you will have immense performance gains. You would reduce 90% hits to the database for searching. Ever wondered why when you go to e-bookers.com or xpedia.com or yatra.com to book a flight and you click on the book button because the fare seems too exciting and you get an error message telling you that the fare is not valid any more. Yes, exactly => That is a cache failure! These travel sites or price compare engines are not going to hit the database every time you hit the compare button instead the results will be served from the cache, because the query results are micro cached, its a perfect trade-off, by micro caching the results the site gains 100% performance benefits but every once in a while annoys a customer because the fare has expired. 
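The snippet referred to above isn't reproduced in this excerpt, but the pattern it describes (lock, re-check the cache key before querying, then micro-cache with a short absolute expiry) looks roughly like the following sketch - the class and method names are illustrative and the 60-second window is just an example:

    using System;
    using System.Collections.Generic;
    using System.Web;
    using System.Web.Caching;

    public class SearchResultsCache
    {
        private static readonly object CacheLock = new object();

        // "SearchDatabase" stands in for whatever data-access call the search page really makes.
        public IList<string> GetSearchResults(string criteria)
        {
            string cacheKey = "search:" + criteria;

            var results = (IList<string>)HttpRuntime.Cache[cacheKey];
            if (results != null)
                return results;                      // served straight from the web server

            lock (CacheLock)
            {
                // Second null check: the requests that queued up behind the first one find the
                // cache already populated and never make the extra trip to the database.
                results = (IList<string>)HttpRuntime.Cache[cacheKey];
                if (results != null)
                    return results;

                results = SearchDatabase(criteria);  // the only call that actually hits the database

                // Micro-caching: keep the results for a short fixed window, then let them expire
                // so stale data can never live longer than that window.
                HttpRuntime.Cache.Insert(
                    cacheKey,
                    results,
                    null,                            // no cache dependency
                    DateTime.UtcNow.AddSeconds(60),  // absolute expiration - the "micro" window
                    Cache.NoSlidingExpiration);

                return results;
            }
        }

        private IList<string> SearchDatabase(string criteria)
        {
            // Placeholder for the real query.
            return new List<string>();
        }
    }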
But the trade off works in the favour of these sites as they are still able to process up to 30+ page requests per second which means cater to the site traffic by may be losing 1 customer every once in a while to a competitor who is also using a similar caching technique what are the odds that the user will not come back to their site sooner or later? Recap   Resources Below are some Key resource you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug ins available on Codeplex might also be of interest to you. The Road Ahead Thank you for taking the time out and reading this blog post, you may also want to read Part I and Part II if you haven’t so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/Feedback/Suggestions, etc please leave a comment. Next ‘Load Testing in the cloud’, I’ll be working on exploring the possibilities of running Test controller/Agents in the Cloud. See you on the other side! Thank You!   Share this post : CodeProject

    Read the article

  • How to insert a date to an Open XML worksheet?

    - by Manuel
    I'm using Microsoft Open XML SDK 2 and I'm having a really hard time inserting a date into a cell. I can insert numbers without a problem by setting Cell.DataType = CellValues.Number, but when I do the same with a date (Cell.DataType = CellValues.Date) Excel 2010 crashes (2007 too). I tried setting the Cell.Text value to many date formats as well as Excel's date/numeric format to no avail. I also tried to use styles, removing the type attribute, plus many other pizzas I threw at the wall... Can anyone point me to an example inserting a date to a worksheet? Thanks,
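    For what it's worth, the workaround usually suggested (a sketch against the Open XML SDK 2.0 spreadsheet classes; dateStyleIndex is assumed to point at a CellFormat in the workbook's stylesheet that applies a date number format, which isn't shown here) is to write the value as an OLE Automation date number rather than using CellValues.Date:

        using System;
        using System.Globalization;
        using DocumentFormat.OpenXml;
        using DocumentFormat.OpenXml.Spreadsheet;

        // Writes a DateTime as the serial number Excel expects; the date *appearance* comes from the style.
        static Cell CreateDateCell(string cellReference, DateTime value, uint dateStyleIndex)
        {
            return new Cell
            {
                CellReference = cellReference,
                DataType = new EnumValue<CellValues>(CellValues.Number),   // number, not CellValues.Date
                CellValue = new CellValue(value.ToOADate().ToString(CultureInfo.InvariantCulture)),
                StyleIndex = dateStyleIndex   // assumed: index of a CellFormat using a date NumberFormatId (e.g. 14)
            };
        }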

    Read the article

  • Base 36 to Base 10 conversion using SQL only.

    - by EvilTeach
    A situation has arisen where I need to perform a base 36 to base 10 conversion, in the context of a SQL statement. There doesn't appear to be anything built into Oracle 9, or Oracle 10 to address this sort of thing. My Google-Fu, and AskTom suggest creating a pl/sql function to deal with the task. That is not an option for me at this point. I am looking for suggestions on an approach to take that might help me solve this issue. To put this into a visual form... WITH Base36Values AS ( SELECT '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ' myBase36 FROM DUAL ), TestValues AS ( SELECT '01Z' BASE36_VALUE, 71 BASE10_VALUE FROM DUAL ) SELECT * FROM Base36Values, TestValues I am looking for something to calculate the value 71, based on the input 01Z. As a bribe, each useful answer gets a free upvote. Thanks Evil.

    Read the article

  • Refinery CMS (Rails): Creating a plugin or plugins with multiple models and relationships

    - by jklina
    My goal is to create a way for an admin to create two models in the Refinery admin: Campaigns and Videos. I would like to have it set up so that a Campaign has many Videos and that each Video belongs to a Campaign. Both Videos and Campaigns will have a title, description, and a preview image. I'm not certain of the best way to go about this. Is it possible to set up two plugins and form a relationship between the two? Or should I create one plugin with both models? If someone could point me in the right direction or to a good example of a solution to a similar problem, I would be grateful. Thank you for looking!

    Read the article
