Search Results

Search found 24301 results on 973 pages for 'execution process mfg'.


  • How to connect a new query script with SSMS add-in?

    - by squillman
    I'm trying to create an SSMS add-in. One of the things I want to do is to create a new query window and programmatically connect it to a server instance (in the context of a SQL login). I can create the new query script window just fine, but I can't find how to connect it without first manually connecting to something else (like the Object Explorer). In other words, if I connect Object Explorer to a SQL instance manually and then execute the method of my add-in that creates the query window, I can connect it using this code:

        ServiceCache.ScriptFactory.CreateNewBlankScript(
            Editors.ScriptType.Sql,
            ServiceCache.ScriptFactory.CurrentlyActiveWndConnectionInfo.UIConnectionInfo,
            null);

    But I don't want to rely on CurrentlyActiveWndConnectionInfo.UIConnectionInfo for the connection. I want to set a SQL login username and password programmatically. Does anyone have any ideas?

    EDIT: I've managed to get the query window connected by setting the last parameter to an instance of System.Data.SqlClient.SqlConnection. However, the connection uses the context of the last login that was connected instead of what I'm trying to set programmatically. That is, the user it connects as is the one selected in the connection dialog that you get when you click the New Query button and don't have an Object Explorer connected.

    EDIT 2: I'm writing (or hoping to write) an add-in to automatically send a SQL statement and its execution results to our case-tracking system when run against our production servers. One thought I had was to remove write permissions and assign logins through this add-in, which will also force the user to enter a case #, canceling the statement if it's not there. Another thought I've just had is to inspect the server name in ServiceCache.ScriptFactory.CurrentlyActiveWndConnectionInfo.UIConnectionInfo and compare it to our list of production servers. If it matches and there's no case #, then cancel the query.
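
    One possible direction, sketched below. This is untested and hedged: it assumes the SSMS 2008 object model, where UIConnectionInfo exposes settable ServerName, UserName, Password and AuthenticationType members — verify against the assemblies your add-in links. The idea is to build a UIConnectionInfo for the SQL login rather than borrowing the active window's.

        // Hypothetical sketch -- property names assume the SSMS 2008 assemblies.
        var connInfo = new UIConnectionInfo
        {
            ServerName      = "PRODSERVER01",       // placeholder server name
            UserName        = "caseTrackingLogin",  // placeholder SQL login
            Password        = "secret",
            ApplicationName = "MyAddin"
        };
        connInfo.AuthenticationType = 1; // assumption: property exists and 1 = SQL Server authentication

        ServiceCache.ScriptFactory.CreateNewBlankScript(
            Editors.ScriptType.Sql, connInfo, null);

    If the window still inherits another login's context, that would at least narrow the problem to how SSMS resolves the connection from UIConnectionInfo rather than how the window is created.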


  • Why does VBA Find loop fail when called from Evaluate?

    - by Abiel
    I am having some problems running a Find loop inside of a subroutine when the routine is called using the Application.Evaluate or ActiveSheet.Evaluate method. For example, in the code below, I define a subroutine FindSub() which searches the sheet for the string "xxx". The routine CallSub() calls the FindSub() routine using both a standard Call statement and Evaluate.

    When I run Call FindSub, everything works as expected: each matching address gets printed out to the Immediate window and we get a final message "Finished up" when the code is done. However, when I do Application.Evaluate "FindSub()", only the address of the first match gets printed out, and we never reach the "Finished up" message. In other words, an error is encountered after the Cells.FindNext line as the loop tries to evaluate whether it should continue, and program execution stops without any runtime error being printed.

    I would expect both Call FindSub and Application.Evaluate "FindSub()" to yield the same results in this case. Can someone explain why they do not, and if possible, a way to fix this? Thanks.

    Note: In this example I obviously do not need to use Evaluate. This version is simplified to focus on the particular problem I am having in a more complex situation.

        Sub CallSub()
            Call FindSub
            Application.Evaluate "FindSub()"
        End Sub

        Sub FindSub()
            Dim rngFoundCell As Range
            Dim rngFirstCell As Range
            Set rngFoundCell = Cells.Find(What:="xxx", after:=ActiveCell, LookIn:=xlValues, _
                LookAt:=xlPart, SearchOrder:=xlByRows, SearchDirection:=xlNext, _
                MatchCase:=False, SearchFormat:=False)
            If Not rngFoundCell Is Nothing Then
                Set rngFirstCell = rngFoundCell
                Do
                    Debug.Print rngFoundCell.Address
                    Set rngFoundCell = Cells.FindNext(after:=rngFoundCell)
                Loop Until (rngFoundCell Is Nothing) Or (rngFoundCell.Address = rngFirstCell.Address)
            End If
            Debug.Print "Finished up"
        End Sub


  • Java Access Token: PKCS11 Provider Not Found

    - by oracleruiz
    Hello, I'm trying to access the keystore from my smartcard in Java, using the PKCS11 implementation of OpenSC (http://www.opensc-project.org/opensc) with the following code.

    File windows.cnf:

        name = dnie
        library = C:\WINDOWS\system32\opensc-pkcs11.dll

    Java code:

        String configName = "windows.cnf";
        String PIN = "####";

        Provider p = new sun.security.pkcs11.SunPKCS11(configName);
        Security.addProvider(p);

        KeyStore keyStore = KeyStore.getInstance("PKCS11", "SunPKCS11-dnie"); // =)(=
        char[] pin = PIN.toCharArray();
        keyStore.load(null, pin);

    When execution reaches the line marked =)(= it throws the following exception:

        java.security.KeyStoreException: PKCS11 not found
            at java.security.KeyStore.getInstance(KeyStore.java:635)
            at ObtenerDatos.LeerDatos(ObtenerDatos.java:52)
            at ObtenerDatos.obtenerNombre(ObtenerDatos.java:19)
            at main.main(main.java:27)
        Caused by: java.security.NoSuchAlgorithmException: no such algorithm: PKCS11 for provider SunPKCS11-dnie
            at sun.security.jca.GetInstance.getService(GetInstance.java:70)
            at sun.security.jca.GetInstance.getInstance(GetInstance.java:190)
            at java.security.Security.getImpl(Security.java:662)
            at java.security.KeyStore.getInstance(KeyStore.java:632)

    I think the problem is "SunPKCS11-dnie", but I don't know what to put there. I have tried a lot of combinations... Can anyone help me?


  • How to create a resource manager in ASP.NET

    - by Bart
    Hello, I would like to create a resource manager on my page and use some data stored in my resource files (default.aspx.resx and default.aspx.en.resx). The code looks like this:

        System.Resources.ResourceManager myResourceManager =
            System.Resources.ResourceManager.CreateFileBasedResourceManager(
                "resource",
                Server.MapPath("App_LocalResources") + Path.DirectorySeparatorChar,
                null);

        if (User.Identity.IsAuthenticated)
        {
            Welcome.Text = myResourceManager.GetString("LoggedInWelcomeText");
        }
        else
        {
            Welcome.Text = myResourceManager.GetString("LoggedOutWelcomeText");
        }

    But when I compile and run it on my local server I get this type of error:

        Could not find any resources appropriate for the specified culture (or the
        neutral culture) on disk. baseName: resource  locationInfo:  fileName: resource.resources

        Description: An unhandled exception occurred during the execution of the current
        web request. Please review the stack trace for more information about the error
        and where it originated in the code.

        Exception Details: System.Resources.MissingManifestResourceException: Could not
        find any resources appropriate for the specified culture (or the neutral
        culture) on disk. baseName: resource  locationInfo:  fileName: resource.resources

        Source Error:
        Line 89:  else
        Line 90:  {
        Line 91:      Welcome.Text = myResourceManager.GetString("LoggedOutWelcomeText");
        Line 92:  }
        Line 93:

    Can you please assist me with this issue?
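
    Two things worth noting here. CreateFileBasedResourceManager looks for compiled binary .resources files (the output of resgen.exe), not .resx sources, and the base name must match the file name — which is exactly why the exception complains about a missing resource.resources. If the goal is just per-page strings from default.aspx.resx in App_LocalResources, a sketch of the built-in route (assuming the .resx files sit in App_LocalResources as page-local resources):

        // Let ASP.NET compile App_LocalResources/default.aspx.resx itself and look
        // the string up by key; GetLocalResourceObject is a standard Page method.
        string key = User.Identity.IsAuthenticated
            ? "LoggedInWelcomeText"
            : "LoggedOutWelcomeText";
        Welcome.Text = (string)GetLocalResourceObject(key);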


  • Why is the Clojure Hello World program so slow compared to Java and Python?

    - by viksit
    Hi all, I'm reading "Programming Clojure" and I was comparing some languages I use for some simple code. I noticed that the Clojure implementations were the slowest in each case. For instance:

    Python - hello.py

        def hello_world(name):
            print "Hello, %s" % name

        hello_world("world")

    and the result:

        $ time python hello.py
        Hello, world

        real    0m0.027s
        user    0m0.013s
        sys     0m0.014s

    Java - hello.java

        import java.io.*;

        public class hello {
            public static void hello_world(String name) {
                System.out.println("Hello, " + name);
            }

            public static void main(String[] args) {
                hello_world("world");
            }
        }

    and the result:

        $ time java hello
        Hello, world

        real    0m0.324s
        user    0m0.296s
        sys     0m0.065s

    and finally, Clojure - hellofun.clj

        (defn hello-world [username]
          (println (format "Hello, %s" username)))

        (hello-world "world")

    and the results:

        $ time clj hellofun.clj
        Hello, world

        real    0m1.418s
        user    0m1.649s
        sys     0m0.154s

    That's a whole, gargantuan 1.4 seconds! Does anyone have pointers on what the cause of this could be? Is Clojure really that slow, or are there JVM tricks et al. that need to be used in order to speed up execution? More importantly - isn't this huge difference in performance going to be an issue at some point? (I mean, let's say I was using Clojure for a production system - the gain I get in using Lisp seems completely offset by the performance issues I can see here.)

    The machine used here is a 2007 MacBook Pro running Snow Leopard, with a 2.16 GHz Intel Core 2 Duo and 2 GB of DDR2 SDRAM. BTW, the clj script I'm using is from here and looks like:

        #!/bin/bash

        JAVA=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java
        CLJ_DIR=/opt/jars
        CLOJURE=$CLJ_DIR/clojure.jar
        CONTRIB=$CLJ_DIR/clojure-contrib.jar
        JLINE=$CLJ_DIR/jline-0.9.94.jar
        CP=$PWD:$CLOJURE:$JLINE:$CONTRIB

        # Add extra jars as specified by `.clojure` file
        if [ -f .clojure ]
        then
            CP=$CP:`cat .clojure`
        fi

        if [ -z "$1" ]; then
            $JAVA -server -cp $CP jline.ConsoleRunner clojure.lang.Repl
        else
            scriptname=$1
            $JAVA -server -cp $CP clojure.main $scriptname -- $*
        fi


  • Handling long-running, large transactions with Perl DBI

    - by 1stdayonthejob
    I've got a large transaction comprising getting lots of data from database A, doing some manipulations with this data, then inserting the manipulated data into database B. I've only got permission to select in database A, but I can create tables and insert/update etc. in database B.

    The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes.

    How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.)? Doing the transaction in one go doesn't seem like a good option, because the amount of data from database A means it would take at least a day or two for data manipulation and insertion into database B.

    The data from database A can be grouped into around 1000 groups using unique keys, with each key containing thousands of rows. One way I thought I could do this is to write a script that commits per group, meaning I've got to track which groups have already been inserted into database B. The only ways I can think of to track the progress of which groups have been processed are a log file or a table in database B. A second way I thought could work is to dump all the necessary fields needed for loading the classes for manipulation and insertion into a flat file, read the file to initialize the classes, and insert into database B. This also means that I've got to do some logging, but it should narrow things down to the exact row in the flat file if any error occurs. The script will look something like this:

        use strict;
        use warnings;
        use DBI;

        # connect to database A
        my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password,
                               { RaiseError => 1, AutoCommit => 0 });

        # statement to get data based on group unique key
        my $sth = $dbh->prepare($my_sql);

        my @groups;    # I have a list of this already

        open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

        eval {
            foreach my $g (@groups) {
                # subroutine to check if group has already been processed,
                # either from log file or from database table
                next if is_processed($g);

                $sth->execute($g);
                my $data = $sth->fetchall_arrayref;

                # manipulate $data, then use it to load perl classes for
                # insertion into database B
                # .
                # .
                # .

                print $fh "$g\n";    # record this group as done
            }
        };
        if ($@) {
            $dbh->rollback;
            die "something wrong...rollback";
        }

    So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue.

    Both these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping the ones that have been committed to database B, and processing the remaining data. I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into the other? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.


  • Null Reference Exception In LINQ DataContext

    - by Frank
    I have a NullReferenceException caused by this code:

        var recentOrderers = (from p in db.CMS
                              where p.ODR_DATE > DateTime.Today - new TimeSpan(60, 0, 0, 0)
                              select p.SOLDNUM).Distinct();

        result = (from p in db.CMS
                  where p.ORDER_ST2 == "SH"
                     && p.ODR_DATE > DateTime.Today - new TimeSpan(365, 0, 0, 0)
                     && p.ODR_DATE < DateTime.Today - new TimeSpan(60, 0, 0, 0)
                     && !(recentOrderers.Contains(p.SOLDNUM)) /**/
                  select p.SOLDNUM).Distinct().Count();

    result is of type double. When I comment out

        !(recentOrderers.Contains(p.SOLDNUM))

    the code runs fine. I have verified that recentOrderers is not null, and when I run

        if (recentOrderers.Contains(0)) return;

    execution follows this path and returns. Not sure what is going on, since I use similar code above it:

        var m = (from p in db.CMS
                 where p.ORDER_ST2 == "SH"
                 select p.SOLDNUM).Distinct();

        double result = (from p in db.CUST
                         join r in db.DEMGRAPH on p.CUSTNUM equals r.CUSTNUM
                         where p.CTYPE3 == "cmh"
                            && !(m.Contains(p.CUSTNUM))
                            && r.ColNEWMEMBERDAT.Value.Year > 1900
                         select p.CUSTNUM).Distinct().Count();

    which also runs flawlessly. After noting the similarity, can anyone help? Thanks in advance. -Frank, GTP, Inc.
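
    One hedged workaround worth trying: materialize recentOrderers on the client first, so the outer query's Contains binds to an in-memory list (translated as a literal IN (...) list) rather than a nested query over the same table. This assumes the distinct SOLDNUM set is small enough to pull down, and it changes where the work happens, not just the error:

        // .ToList() runs the inner query immediately; Contains then translates
        // against a fixed value list instead of a correlated subquery.
        var recentOrderers = (from p in db.CMS
                              where p.ODR_DATE > DateTime.Today - new TimeSpan(60, 0, 0, 0)
                              select p.SOLDNUM).Distinct().ToList();

    If the exception disappears with the list version, that points at the provider's translation of the composed subquery rather than at your data.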


  • LDAP query returns null result when deployed

    - by Trey Carroll
    I'm using a very simple LDAP query in my ASP.NET MVC 2.0 site:

        String ldapPath = ConfigReader.LdapPath;
        String emailAddress = null;
        try
        {
            DirectorySearcher search = new DirectorySearcher(ConfigReader.LdapPath);
            search.Filter = String.Format(
                "(&(objectClass=user)(objectCategory=person)(objectSid={0})) ",
                securityIdentifierValue);

            // add the mail property to the list of props to retrieve
            search.PropertiesToLoad.Add("mail");

            var result = search.FindOne();
            if (result == null)
            {
                throw new Exception("Ldap Query with filter:" + search.Filter.ToString()
                    + " returned a null value (no match found)");
            }
            else
            {
                emailAddress = result.Properties["mail"][0].ToString();
            }
        }
        catch (ArgumentOutOfRangeException aoorEx)
        {
            throw new Exception("The query could not find an email for this user.");
        }
        catch (Exception ex)
        {
            //_log.Error(string.Format("======!!!!!! ERROR ERROR ERROR !!!!! in LdapLookupUtil.cs getEmailFromLdap Exception: {0}", ex));
            throw ex;
        }
        return emailAddress;

    It works fine on my localhost machine. It works fine when I run it in VS2010 on the server. It always returns a null result when deployed.

    [The web.config didn't survive posting; only the stock comment text for the authentication and customErrors sections came through.]

    I'm running it under the default app pool. Does anybody see the problem? This is driving me crazy!
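
    Since the query works when run as your interactive account (Visual Studio) but not when deployed (the application pool identity), one hedged experiment is to bind with explicit credentials so the search no longer depends on who the worker process runs as — for example with a dedicated service account (the account name and password below are placeholders):

        using System.DirectoryServices;

        using (var root = new DirectoryEntry(ConfigReader.LdapPath,
                                             @"MYDOMAIN\ldapQueryUser",   // hypothetical service account
                                             "servicePassword",
                                             AuthenticationTypes.Secure))
        using (var search = new DirectorySearcher(root))
        {
            search.Filter = String.Format(
                "(&(objectClass=user)(objectCategory=person)(objectSid={0}))",
                securityIdentifierValue);
            search.PropertiesToLoad.Add("mail");

            // Still null here too? Then the bind account can't read the user,
            // and the problem is directory permissions rather than identity.
            var result = search.FindOne();
        }

    If this works, the underlying cause is likely that the IIS 7 app pool identity has no rights to query the directory, which the in-process dev server masks.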


  • Investigating the root cause behind SharePoint's "request not found in the TrackedRequests"

    - by Muhimbi
    We have a long-standing issue in our bug tracking system about the dreaded "ERROR: request not found in the TrackedRequests. We might be creating and closing webs on different threads." message in SharePoint's trace log. As we develop workflow software for the SharePoint market, we look into this issue from time to time to make sure it is not caused by our products. I have personally come to the conclusion that this is a problem in SharePoint, but perhaps someone else can prove me wrong. Here is what I know:

    - According to the hundreds of search results returned by Google on this topic, this issue appears to be mainly related to SharePoint workflows, both SharePoint Designer and Visual Studio based workflows.
    - Assuming ULS logging is set to Monitorable, the easiest way to reproduce this problem is to create a new SharePoint Designer workflow, attach it to a document library, set it to auto start on add/update, don't add any actions, save the workflow and upload a file to the document library.
    - The error is only visible in the SharePoint trace log; it does not appear to impact the execution of the workflow at hand.
    - I have verified that the problem occurs on 32-bit as well as 64-bit systems, Win2K3 and 2K8, WSS and MOSS, with SharePoint versions up to the December 2009 Cumulative Update (6524).
    - The problem does not occur when a workflow is started manually.
    - There are dozens of related posts on MSDN Forums, hundreds on Google, one on StackOverflow and none on SharePoint Overflow. There appears to be no answer.

    Does anyone have any idea about what is going on, what is causing this, and whether we should worry or file this under 'Red Herrings'?


  • mercurial: how to synchronize mq patches from a master repo as mq patches to a set of clone repos

    - by dim
    I have to run a dozen different build tests on a code base maintained in a Mercurial repository. I don't want to run these tests serially on the same repository because they modify a set of common files, and I want to run them in parallel on different machines. Also, after all tests are run, I want access to the latest test results from those test work areas.

    Currently I'm cloning the master repository a dozen times and running a different test in each clone. Before each test execution I do a pull/update/purge preparation sequence in order to start the test from the latest clean state. That works well for me.

    I'm also preparing new changes using the mq extension, which I test on all clones as above before committing them. For testing some ready candidate mq patches, I want somehow to deploy/synchronize them so that they are available in the test clones, apply those ready for testing using some guard, and then run the test. Has anybody done this synchronization before? What's the simplest way to do it? Do I need versioned mq patches for that?


  • Is it possible that a single-threaded program is executed simultaneously on more than one CPU core?

    - by Wolfgang Plaschg
    When I run a single-threaded program that I have written on my quad-core Intel, I can see in the Windows Task Manager that all four cores of my CPU are actually more or less active. One core is more active than the other three, but there is activity on those as well. There's no other program running (besides the OS kernel, of course) that would plausibly explain that activity. And when I close my program, activity on all cores drops down to nearly zero. All that is left is a little "noise" on the cores, so I'm pretty sure all the visible activity comes directly or indirectly (like invoking system routines) from my program.

    Is it possible that the OS or the cores themselves try to balance some code or execution across all four cores, even for a program that is not multithreaded? Do you have any links documenting this technique?

    Some info about the program: it's a console app written in Qt; the Task Manager states that only one thread is running. Maybe Qt uses threads internally, but I don't use signals or slots, nor any GUI.

    Link to Task Manager screenshot: http://img97.imageshack.us/img97/6403/taskmanager.png

    This question is language-agnostic and not tied to Qt/C++; I just want to know whether Windows or Intel balance single-threaded code across all cores. If they do, how does this technique work? All I can think of is that kernel routines like reading from disk etc. are scheduled on all cores, but this won't improve performance significantly, since the code still has to run synchronously with the kernel API calls.

    EDIT: Do you know any tools for a better analysis of single- and/or multi-threaded programs than the poor Windows Task Manager?
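
    Short answer: the OS scheduler is free to migrate a single thread between cores from one time slice to the next, which is enough to make all four cores show some activity; there is no transparent parallelization of single-threaded code. A quick way to confirm this (a sketch in managed code, but the idea applies to any Windows process) is to pin the process to one core and watch the other cores go quiet. For the tools question, Sysinternals Process Explorer shows per-thread CPU time and context switches, which makes this much easier to see than Task Manager.

        using System;
        using System.Diagnostics;

        class AffinityDemo
        {
            static void Main()
            {
                // Affinity is a bit mask: 0x1 = CPU 0 only. With this set, the
                // scheduler can no longer migrate the thread between cores, so
                // Task Manager should show activity on a single core.
                Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)0x1;

                // ... run the single-threaded workload here and compare the
                // per-core graphs with and without the affinity line ...
            }
        }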


  • Disable antialiasing for a specific GDI device context

    - by Jacob Stanley
    I'm using a third-party library to render an image to a GDI DC, and I need to ensure that any text is rendered without any smoothing/antialiasing so that I can convert the image to a predefined palette with indexed colors.

    The third-party library I'm using for rendering doesn't support this and just renders text as per the current Windows settings for font rendering. They've also said that it's unlikely they'll add the ability to switch antialiasing off any time soon. The best workaround I've found so far is to call the third-party library this way (error handling and prior-settings checks omitted for brevity):

        private static void SetFontSmoothing(bool enabled)
        {
            int pv = 0;
            SystemParametersInfo(Spi.SetFontSmoothing, enabled ? 1 : 0, ref pv, Spif.None);
        }

        // snip
        Graphics graphics = Graphics.FromImage(bitmap);
        IntPtr deviceContext = graphics.GetHdc();

        SetFontSmoothing(false);
        thirdPartyComponent.Render(deviceContext);
        SetFontSmoothing(true);

    This obviously has a horrible effect on the operating system: other applications flicker from ClearType enabled to disabled and back every time I render the image. So the question is, does anyone know how I can alter the font rendering settings for a specific DC? Even if I could just make the changes process- or thread-specific instead of affecting the whole operating system, that would be a big step forward! (That would give me the option of farming this rendering out to a separate process - the results are written to disk after rendering anyway.)

    EDIT: I'd like to add that I don't mind if the solution is more complex than just a few API calls. I'd even be happy with a solution that involved hooking system DLLs if it was only about a day's work.

    EDIT: Background information. The third-party library renders using a palette of about 70 colors. After the image (which is actually a map tile) is rendered to the DC, I convert each pixel from its 32-bit color back to its palette index and store the result as an 8bpp greyscale image. This is uploaded to the video card as a texture. During rendering, I re-apply the palette (also stored as a texture) with a pixel shader executing on the video card. This allows me to switch and fade between different palettes instantaneously instead of needing to regenerate all the required tiles. It takes between 10 and 60 seconds to generate and upload all the tiles for a typical view of the world.

    EDIT: Renamed GraphicsDevice to Graphics. The class GraphicsDevice in the previous version of this question is actually System.Drawing.Graphics. I had renamed it (using GraphicsDevice = ...) because the code in question is in the namespace MyCompany.Graphics and the compiler wasn't able to resolve it properly.

    EDIT: Success! I even managed to port the PatchIat function below to C# with the help of Marshal.GetFunctionPointerForDelegate. The .NET interop team really did a fantastic job! I'm now using the following syntax, where Patch is an extension method on System.Diagnostics.ProcessModule:

        module.Patch(
            "Gdi32.dll",
            "CreateFontIndirectA",
            (CreateFontIndirectA original) => font =>
            {
                font->lfQuality = NONANTIALIASED_QUALITY;
                return original(font);
            });

        private unsafe delegate IntPtr CreateFontIndirectA(LOGFONTA* lplf);

        private const int NONANTIALIASED_QUALITY = 3;

        [StructLayout(LayoutKind.Sequential)]
        private struct LOGFONTA
        {
            public int lfHeight;
            public int lfWidth;
            public int lfEscapement;
            public int lfOrientation;
            public int lfWeight;
            public byte lfItalic;
            public byte lfUnderline;
            public byte lfStrikeOut;
            public byte lfCharSet;
            public byte lfOutPrecision;
            public byte lfClipPrecision;
            public byte lfQuality;
            public byte lfPitchAndFamily;
            public unsafe fixed sbyte lfFaceName[32];
        }


  • WebSphere Application Server EJB Optimization

    - by Chris Aldrich
    We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters; each cluster will have two nodes.

    Our application, or our system as I should rather say, comes in two or three parts:

    Part 1: An EAR deployed to one cluster that contains third-party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of remote home interfaces.

    Part 2: An EAR deployed to the same cluster as the first EAR. This EAR contains EJB 3's that make calls into the EJB 2's supplied by the vendor and the custom code. These EJB 3's are used by the JSF UI, also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients.

    Part 3: There may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0's and web services that are deployed to the other cluster.

    Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI, but if we are going across clusters and/or other cells, then the communication should be web services.

    That said, some of us are wondering about performance and optimizing communication for the speed of our applications that will use our web services and EJB's. Right now most EJB's are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering whether WAS does any optimizations between apps in the same node/cluster space. If two apps are installed in the same area and they call each other via a remote home interface, is WAS smart enough to make it a local call? Are there other optimization techniques? Should we consider them, or not? What are the costs/benefits? Here is the question from one of our team members as sent in their email:

    The question is: supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT Java services via EJB 3 - what are our options for performance optimization when both the EJB server and client are running in the same container? As one point of reference, Google has given me some very old WebSphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable Call By Reference for EJB communication when they're in the same application server JVM. It states the following:

        Because EJBs are inherently location independent, they use a remote programming
        model. Method parameters and return values are serialized over RMI-IIOP and
        returned by value. This is the intrinsic RMI "Call By Value" model.

        WebSphere provides the "No Local Copies" performance optimization for running
        EJBs and clients (typically servlets) in the same application server JVM. The
        "No Local Copies" option uses "Call By Reference" and does not create local
        proxies for called objects when both the client and the remote object are in
        the same process. Depending on your workload, this can result in a significant
        overhead savings.

        Configure "No Local Copies" by adding the following two command line parameters
        to the application server JVM:

            -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
            -Dcom.ibm.CORBA.iiop.noLocalCopies=true

        CAUTION: The "No Local Copies" configuration option improves performance by
        changing "Call By Value" to "Call By Reference" for clients and EJBs in the
        same JVM. One side effect of this is that Java object derived (non-primitive)
        method parameters can actually be changed by the called enterprise bean.

    Also, we will be using Process Server 6.2 and WESB 6.2 as well in the future. Any ideas? Recommendations? Thanks


  • How do I unit test actions that use UpdateModel, without mocking?

    - by Hellfire
    I have been working my way through Scott Guthrie's excellent post on ASP.NET MVC Beta 1. In it he shows the improvements made to the UpdateModel method and how they improve unit testing. I have recreated a similar project; however, any time I run a unit test that contains a call to UpdateModel I receive an ArgumentNullException naming the controllerContext parameter.

    Here are the relevant bits, starting with my model:

        public class Country
        {
            public Int32 ID { get; set; }
            public String Name { get; set; }
            public String Iso3166 { get; set; }
        }

    The controller action:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Edit(Int32 id, FormCollection form)
        {
            using (ModelBindingDataContext db = new ModelBindingDataContext())
            {
                Country country = db.Countries.Where(c => c.CountryID == id).SingleOrDefault();

                try
                {
                    UpdateModel(country, form);
                    db.SubmitChanges();
                    return RedirectToAction("Index");
                }
                catch
                {
                    return View(country);
                }
            }
        }

    And finally my unit test that's failing:

        [TestMethod]
        public void Edit()
        {
            CountryController controller = new CountryController();
            FormCollection form = new FormCollection();
            form.Add("Name", "Canada");
            form.Add("Iso3166", "CA");

            var result = controller.Edit(2 /*Canada*/, form) as RedirectToRouteResult;

            Assert.IsNotNull(result, "Expected to be redirected on successful POST.");
            Assert.AreEqual("Show", result.RouteName, "Expected to redirect to the View action.");
        }

    ArgumentNullException is thrown by the call to UpdateModel with the message "Value cannot be null. Parameter name: controllerContext". I'm assuming that somewhere UpdateModel requires the System.Web.Mvc.ControllerContext, which isn't present during execution of the test. I'm also assuming that I'm doing something wrong somewhere and just need to be pointed in the right direction. Help please!
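
    For what it's worth, the exception suggests UpdateModel null-checks the controller's ControllerContext, which is never assigned when a controller is constructed directly in a test. A hedged sketch of one fix follows — the exact ControllerContext constructor shape moved around between MVC previews, so treat this as a pattern rather than the API of Beta 1 specifically:

        // HttpContextBase's members are virtual (the defaults throw if actually
        // touched), so an empty subclass can stand in without a mocking
        // framework as long as UpdateModel only reads values from the
        // FormCollection you pass it.
        class EmptyHttpContext : HttpContextBase { }

        [TestMethod]
        public void Edit()
        {
            CountryController controller = new CountryController();
            controller.ControllerContext = new ControllerContext(
                new EmptyHttpContext(), new RouteData(), controller);

            // ... build the FormCollection and call controller.Edit as before ...
        }

    If UpdateModel does reach into the request itself, the empty stub will throw and you'd need a fuller fake; the point is only that the context must be non-null before the action runs.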


  • applicationWillTerminate Appears to be Inconsistent

    - by Lauren Quantrell
    This one has me batty. In applicationWillTerminate I am doing two things: saving some settings to the app settings plist file and saving any changed data to the SQLite database referenced by the managedObjectContext. The problem is that it works sometimes and not others, with the same behavior in the simulator and on the device. If I hit the home button while the app is running, I can only sometimes get the data to store in the plist and in the Core Data store. It seems that either both work or neither works, and it makes no difference if I switch the execution order (saveState then managedObjectContext, or managedObjectContext then saveState). I can't figure out how this can happen. Any help is greatly appreciated. lq

    AppDelegate.m:

        @synthesize rootViewController;

        - (void)applicationWillTerminate:(UIApplication *)application {
            [rootViewController saveState];

            NSError *error;
            if (managedObjectContext != nil) {
                if ([managedObjectContext hasChanges] && ![managedObjectContext save:&error]) {
                    // Handle error
                    exit(-1);  // Fail
                }
            }
        }

    RootViewController.m:

        - (void)saveState {
            NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
            [userDefaults setInteger:self.someInteger forKey:kSomeNumber];
            [userDefaults setObject:self.someArray forKey:kSomeArray];
        }


  • use of assertions for type checking in php?

    - by user151841
    I do some checking of arguments in my classes in PHP using exception-throwing functions. I have functions that do a basic check (===, in_array, etc.) and throw an exception on false. So I can write

        assertNumeric($argument, "\$argument is not numeric.");

    instead of

        if (!is_numeric($argument)) {
            throw new Exception("\$argument is not numeric.");
        }

    which saves some typing. But I was reading in the comments of the PHP manual page on assert() that:

        As noted on Wikipedia - "assertions are primarily a development tool, they are
        often disabled when a program is released to the public." and "Assertions
        should be used to document logically impossible situations and discover
        programming errors -- if the 'impossible' occurs, then something fundamental is
        clearly wrong. This is distinct from error handling: most error conditions are
        possible, although some may be extremely unlikely to occur in practice. Using
        assertions as a general-purpose error handling mechanism is usually unwise:
        assertions do not allow for graceful recovery from errors, and an assertion
        failure will often halt the program's execution abruptly. Assertions also do
        not display a user-friendly error message."

        This means that the advice given by "gk at proliberty dot com" to force
        assertions to be enabled, even when they have been disabled manually, goes
        against best practices of only using them as a development tool.

    So, am I 'doing it wrong'? What other/better ways of doing this are there?


  • SQL Where Clause Against View

    - by Adam Carr
    I have a view (actually, it's a table-valued function, but the observed behavior is the same in both) that inner joins and left outer joins several other tables. When I query this view with a where clause similar to

        SELECT * FROM [v_MyView] WHERE [Name] LIKE '%Doe, John%'

    the query is very slow, but if I do the following

        SELECT * FROM [v_MyView]
        WHERE [ID] IN
        (
            SELECT [ID] FROM [v_MyView] WHERE [Name] LIKE '%Doe, John%'
        )

    it is MUCH faster. The first query takes at least 2 minutes to return, if not longer, whereas the second query returns in less than 5 seconds. If I run the whole thing as one SQL statement (without the use of a view) it is very fast as well. Any suggestions on how I can improve this?

    I believe this result comes from how a view must behave like a table: if a view has OUTER JOINs, GROUP BYs or TOP ##, the results could differ depending on whether the where clause is applied before or after the view executes. My question is: why wouldn't SQL optimize my first query into something as efficient as my second query?


  • Self-hosted WCF server and SSL

    - by jitm
    Hello, there is a self-hosted WCF server (not IIS), and certificates were generated (on Windows XP) using command lines like:

        makecert.exe -sr CurrentUser -ss My -a sha1 -n CN=SecureClient -sky exchange -pe
        makecert.exe -sr CurrentUser -ss My -a sha1 -n CN=SecureServer -sky exchange -pe

    These certificates were added to the server code like this:

        serviceCred.ServiceCertificate.SetCertificate(StoreLocation.LocalMachine,
            StoreName.My, X509FindType.FindBySubjectName, "SecureServer");
        serviceCred.ClientCertificate.SetCertificate(StoreLocation.LocalMachine,
            StoreName.My, X509FindType.FindBySubjectName, "SecureClient");

    After all the previous operations I created a simple client to check the SSL connection to the server.

    Client configuration:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <bindings>
              <basicHttpBinding>
                <binding name="BasicHttpBinding_IAdminContract" closeTimeout="00:01:00"
                         openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                         allowCookies="false" bypassProxyOnLocal="false"
                         hostNameComparisonMode="StrongWildcard" maxBufferSize="65536"
                         maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                         messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                         useDefaultWebProxy="true">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <security mode="TransportCredentialOnly">
                    <transport clientCredentialType="Basic"/>
                  </security>
                </binding>
              </basicHttpBinding>
            </bindings>
            <client>
              <endpoint address="https://myhost:8002/Admin" binding="basicHttpBinding"
                        bindingConfiguration="BasicHttpBinding_IAdminContract"
                        contract="Admin.IAdminContract" name="BasicHttpBinding_IAdminContract" />
            </client>
          </system.serviceModel>
        </configuration>

    Code:

        Admin.AdminContractClient client = new AdminContractClient("BasicHttpBinding_IAdminContract");
        client.ClientCredentials.UserName.UserName = "user";
        client.ClientCredentials.UserName.Password = "pass";
        var result = client.ExecuteMethod();

    During execution I receive the following error:

        The provided URI scheme 'https' is invalid; expected 'http'.
        Parameter name: via

    Question: how do I enable SSL for the self-hosted server, and where should I set up the certificates for the client and server? Thanks.
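
    The error itself is consistent: basicHttpBinding with security mode TransportCredentialOnly only speaks http, while the endpoint address says https. A hedged sketch of the matching pair follows (service side shown in code; AdminService is a placeholder implementation type). Note also that a self-hosted HTTPS endpoint needs the server certificate bound to the port outside WCF — httpcfg.exe on XP/2003, or netsh http add sslcert on Vista and later.

        // Transport security = HTTPS; the Basic credentials ride inside the TLS tunnel.
        var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

        var host = new ServiceHost(typeof(AdminService));   // hypothetical service type
        host.AddServiceEndpoint(typeof(IAdminContract),
                                binding,
                                "https://myhost:8002/Admin");
        host.Open();

    On the client, the same change applies in config: security mode="Transport" instead of "TransportCredentialOnly", keeping clientCredentialType="Basic".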


  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (who isn't!) but priority is being given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode cache or memory cache such as APC, eAccelerator, XCache or memcached. If an entry is not found there, it is then searched for in the secondary slow cache, i.e. PHP includes.

    Are the opcode caches actually faster than doing a require_once on a PHP file containing a var_export'd array of data? My tests are inconclusive, as my development box (XAMPP with PHP 5.3) keeps throwing errors when installing any of the aforementioned programs.

    Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it so no autoloading needs to take place; however, this is not the question. Because a page script can have up to 50-60 helper files included, I have a feeling that if the site were under pressure it would buckle because of all the I/O this incurs. Ignore for the moment that there is an output cache in place that would remove the need for what I am about to suggest, and also that opcode caches would render this moot.

    What I have tried to do is join all the helper files required for the script's execution into one single file. This is achievable and works well; however, it has the side effect of increasing memory usage dramatically, even though technically the same code is being used. What are your thoughts and opinions on this?

    Read the article

  • Registry access error when Migrating ASP.NET application to IIS7

    - by fande455
    I'm running Windows 7 64-bit and IIS7. I'm trying to set up a web application that was previously in IIS6 on XP. It's giving me the error below. I've added the Network Service user to the Performance Monitor Users group, to no avail.

        Access to the registry key 'Global' is denied.

        Description: An unhandled exception occurred during the execution of the current
        web request. Please review the stack trace for more information about the error
        and where it originated in the code.

        Exception Details: System.UnauthorizedAccessException: Access to the registry
        key 'Global' is denied.

        ASP.NET is not authorized to access the requested resource. Consider granting
        access rights to the resource to the ASP.NET request identity. ASP.NET has a
        base process identity (typically {MACHINE}\ASPNET on IIS 5 or Network Service
        on IIS 6) that is used if the application is not impersonating. If the
        application is impersonating via <identity impersonate="true"/>, the identity
        will be the anonymous user (typically IUSR_MACHINENAME) or the authenticated
        request user.

        To grant ASP.NET access to a file, right-click the file in Explorer, choose
        "Properties" and select the Security tab. Click "Add" to add the appropriate
        user or group. Highlight the ASP.NET account, and check the boxes for the
        desired access.

    Read the article

  • IUsable: controlling resources in a better way than IDisposable

    - by Ilya Ryzhenkov
    I wish we had a "Usable" pattern in C#, where the code block of a using construct would be passed to a function as a delegate:

        class Usable : IUsable
        {
            public void Use(Action action) // implements IUsable
            {
                // acquire resources
                action();
                // release resources
            }
        }

    and in user code:

        using (new Usable())
        {
            // this code block is converted to a delegate and passed to the Use method above
        }

    Pros:

    - Controlled execution, exceptions
    - The fact of using "Usable" is visible in the call stack

    Cons:

    - Cost of the delegate

    Do you think it is feasible and useful, and does it have any problems from the language point of view? Are there any pitfalls you can see?

    EDIT: David Schmitt proposed the following:

        using (new Usable(delegate() {
            // actions here
        })) {}

    It can work in the sample scenario like that, but usually you have the resource already allocated and want it to look like this:

        using (Repository.GlobalResource)
        {
            // actions here
        }

    where GlobalResource (yes, I know global resources are bad) implements IUsable. You can rewrite it as short as

        Repository.GlobalResource.Use(() =>
        {
            // actions here
        });

    but it looks a little bit weird (and more weird if you implement the interface explicitly), and this is so often the case in various flavours that I thought it might deserve new syntactic sugar in the language.
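
    One detail worth pinning down in the sketch above: the "controlled execution, exceptions" benefit only materializes if the release step sits in a finally block. A slightly fleshed-out version of the same idea as it can be written in today's C# — no new syntax, just the delegate-accepting method:

        using System;

        public interface IUsable
        {
            void Use(Action action);
        }

        public class Usable : IUsable
        {
            public void Use(Action action)
            {
                // acquire resources
                try
                {
                    action();
                }
                finally
                {
                    // release resources -- runs even if the block throws,
                    // mirroring what a using/Dispose pair guarantees
                }
            }
        }

        // call site:
        // Repository.GlobalResource.Use(() => { /* actions here */ });

    Written this way, the proposal reduces to syntactic sugar over a method call, which also frames the "cost of the delegate" con: it is the same closure allocation any lambda-accepting API pays.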


  • Java thread dump where main thread has no call stack? (jsvc)

    - by dwhsix
    We have a Java process running as a daemon (under jsvc). Every several days it just stops doing any work: output to the logfile stops (it is pretty verbose, on 5-minute intervals) and it consumes no CPU or I/O. There are no exceptions logged in the logfile nor in stderr or stdout. The last log statement is just prior to a db commit being done, but there is no open connection on the db server (MySQL), and reviewing the code, there should always be additional log output after that, even if it had encountered an exception that was going to bubble up.

    The most curious thing I find is that in the thread dump (included below), there's no thread in our code at all, and the main thread seems to have no context whatsoever:

        "main" prio=10 tid=0x0000000000614000 nid=0x445d runnable [0x0000000000000000]
           java.lang.Thread.State: RUNNABLE

    As noted earlier, this is a daemon process running using jsvc, but I don't know if that has anything to do with it (I can restructure the code to also allow running it directly, to test). Any suggestions on what might be happening here? Thanks... dwh

    Full thread dump:

        Full thread dump Java HotSpot(TM) 64-Bit Server VM (14.2-b01 mixed mode):

        "MySQL Statement Cancellation Timer" daemon prio=10 tid=0x00002aaaf81b8800 nid=0x447b in Object.wait() [0x00002aaaf6a22000]
           java.lang.Thread.State: WAITING (on object monitor)
            at java.lang.Object.wait(Native Method)
            - waiting on <0x00002aaab5556d50> (a java.util.TaskQueue)
            at java.lang.Object.wait(Object.java:485)
            at java.util.TimerThread.mainLoop(Timer.java:483)
            - locked <0x00002aaab5556d50> (a java.util.TaskQueue)
            at java.util.TimerThread.run(Timer.java:462)

        "Low Memory Detector" daemon prio=10 tid=0x00000000006a4000 nid=0x4479 runnable [0x0000000000000000]
           java.lang.Thread.State: RUNNABLE

        "CompilerThread1" daemon prio=10 tid=0x00000000006a1000 nid=0x4477 waiting on condition [0x0000000000000000]
           java.lang.Thread.State: RUNNABLE

        "CompilerThread0" daemon prio=10 tid=0x000000000069d000 nid=0x4476 waiting on condition [0x0000000000000000]
           java.lang.Thread.State: RUNNABLE

        "Signal Dispatcher" daemon prio=10 tid=0x000000000069b000 nid=0x4465 waiting on condition [0x0000000000000000]
           java.lang.Thread.State: RUNNABLE

        "Finalizer" daemon prio=10 tid=0x0000000000678800 nid=0x4464 in Object.wait() [0x00002aaaf61d6000]
           java.lang.Thread.State: WAITING (on object monitor)
            at java.lang.Object.wait(Native Method)
            - waiting on <0x00002aaab54a1cb8> (a java.lang.ref.ReferenceQueue$Lock)
            at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
            - locked <0x00002aaab54a1cb8> (a java.lang.ref.ReferenceQueue$Lock)
            at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
            at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

        "Reference Handler" daemon prio=10 tid=0x0000000000676800 nid=0x4463 in Object.wait() [0x00002aaaf60d5000]
           java.lang.Thread.State: WAITING (on object monitor)
            at java.lang.Object.wait(Native Method)
            - waiting on <0x00002aaab54a1cf0> (a java.lang.ref.Reference$Lock)
            at java.lang.Object.wait(Object.java:485)
            at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
            - locked <0x00002aaab54a1cf0> (a java.lang.ref.Reference$Lock)

        "main" prio=10 tid=0x0000000000614000 nid=0x445d runnable [0x0000000000000000]
           java.lang.Thread.State: RUNNABLE

        "VM Thread" prio=10 tid=0x0000000000670000 nid=0x4462 runnable

        "GC task thread#0 (ParallelGC)" prio=10 tid=0x000000000061e000 nid=0x445e runnable
        "GC task thread#1 (ParallelGC)" prio=10 tid=0x0000000000620000 nid=0x445f runnable
        "GC task thread#2 (ParallelGC)" prio=10 tid=0x0000000000622000 nid=0x4460 runnable
        "GC task thread#3 (ParallelGC)" prio=10 tid=0x0000000000623800 nid=0x4461 runnable

        "VM Periodic Task Thread" prio=10 tid=0x00000000006a6800 nid=0x447a waiting on condition

        JNI global references: 797

        Heap
         PSYoungGen      total 162944K, used 48388K [0x00002aaadff40000, 0x00002aaaf2ab0000, 0x00002aaaf5490000)
          eden space 102784K, 47% used [0x00002aaadff40000,0x00002aaae2e81170,0x00002aaae63a0000)
          from space 60160K, 0% used [0x00002aaaeb850000,0x00002aaaeb850000,0x00002aaaef310000)
          to   space 86720K, 0% used [0x00002aaae63a0000,0x00002aaae63a0000,0x00002aaaeb850000)
         PSOldGen        total 699072K, used 699072K [0x00002aaab5490000, 0x00002aaadff40000, 0x00002aaadff40000)
          object space 699072K, 100% used [0x00002aaab5490000,0x00002aaadff40000,0x00002aaadff40000)
         PSPermGen       total 21248K, used 9252K [0x00002aaab0090000, 0x00002aaab1550000, 0x00002aaab5490000)
          object space 21248K, 43% used [0x00002aaab0090000,0x00002aaab09993e8,0x00002aaab1550000)


  • How to add an image to an SSRS report with a dynamic url?

    - by jrummell
    I'm trying to add an image to a report. The image src URL is an IHttpHandler that takes a few query string parameters. Here's an example:

        <img src="Image.ashx?item=1234567890&lot=asdf&width=50" alt=""/>

    I added an Image to a cell and then set Source to External and Value to the following expression:

        ="Image.ashx?item=" + Fields!ItemID.Value + "&lot=" + Fields!LotID.Value + "&width=50"

    But when I view the report, it renders the image HTML as:

        <IMG SRC="" />

    What am I missing?

    Update: Even if I set Value to "image.jpg" it still renders an empty src attribute. I'm not sure if it makes a difference, but I'm using this with a VS 2008 ReportViewer control in remote processing mode.

    Update: I was able to get the images to display in the Report Designer (VS 2005) with an absolute path (http://server/path/to/http/handler). But they didn't display on the Report Manager website. I even set up an Unattended Execution Account that has access to the external URLs.


  • How to form submit and show a different page in ASP.Net MVC?

    - by melaos
    Hi guys, I'm new to ASP.NET MVC. Basically I have built a two-page app which takes the user's registration information and posts it to the database. I use a lot of jQuery and Ajax calls to retrieve data from the database, through a LINQ to SQL stored-proc object. Currently I'm stuck on the page where, after the user submits the form, it should redirect him to /Home/AddProduct. What I found was this error:

        Validation of viewstate MAC failed. If this application is hosted by a Web Farm
        or cluster, ensure that <machineKey> configuration specifies the same
        validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

        Description: An unhandled exception occurred during the execution of the current
        web request. Please review the stack trace for more information about the error
        and where it originated in the code.

        Exception Details: System.Web.HttpException: Validation of viewstate MAC failed.
        If this application is hosted by a Web Farm or cluster, ensure that <machineKey>
        configuration specifies the same validationKey and validation algorithm.
        AutoGenerate cannot be used in a cluster.

    What I used on my form is a combination of HTML controls, ASP.NET controls and some ASP.NET MVC type controls, and I submit the form using action="/Home/ProductAdded". After doing some googling I found I was supposed to add the machineKey, but after doing so the index page became unviewable, because it couldn't find the index file. Removing the action helps, but then the form just doesn't go anywhere.

    So what am I missing here? I feel I'm missing a lot of fundamental understanding about ASP.NET MVC, and I don't even know how to submit a form and go to a different page!
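
    For the submit-and-show-a-different-page part specifically, the usual MVC shape is post-redirect-get: the form posts to one action, which redirects on success. A hedged sketch (the action and parameter names below are placeholders matching the URLs in the question):

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult ProductAdded(FormCollection form)
        {
            // ... validate and save the submitted registration/product data ...

            // On success, return a 302 so the browser lands on a different page;
            // a normal (non-Ajax) form post follows the redirect automatically.
            return RedirectToAction("AddProduct");
        }

    As for the viewstate MAC error: that message comes from Web Forms machinery, which suggests a leftover server-side form (runat="server") or postback controls posting to a URL that is no longer a Web Forms page. A plain <form action="/Home/ProductAdded" method="post"> (or Html.BeginForm in the view) emits no viewstate at all, so no machineKey gymnastics are needed.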


  • AuthorizationExecuteWithPrivileges and osascript failing

    - by cygnl7
    I'm attempting to execute an uninstaller (written in AppleScript) through AuthorizationExecuteWithPrivileges. I'm setting up my rights after creating an empty auth ref like so:

        char *tool = "/usr/bin/osascript";
        AuthorizationItem items = {kAuthorizationRightExecute, strlen(tool), tool, 0};
        AuthorizationRights rights = {sizeof(items)/sizeof(AuthorizationItem), &items};
        AuthorizationFlags flags = kAuthorizationFlagDefaults |
                                   kAuthorizationFlagExtendRights |
                                   kAuthorizationFlagPreAuthorize |
                                   kAuthorizationFlagInteractionAllowed;

        status = AuthorizationCopyRights(authorizationRef, &rights, NULL, flags, NULL);

    Later I call:

        status = AuthorizationExecuteWithPrivileges(authorizationRef, tool,
            kAuthorizationFlagDefaults, (char *const *)args, NULL);

    On Snow Leopard this works fine, but on Leopard I get the following in syslog.log:

        Apr 19 15:30:09 hostname /usr/bin/osascript[39226]: OpenScripting.framework - 'gdut' event blocked in process with mixed credentials (issetugid=0 uid=501 euid=0 gid=20 egid=20)
        Apr 19 15:30:12: --- last message repeated 1 time ---
        ...
        Apr 19 15:30:12 hostname [0x0-0x2e92e9].com.example.uninstaller[39219]: /var/folders/vm/vmkIi0nYG8mHMrllaXaTgk+++TI/-Tmp-/TestApp_tmpfiles/Uninstall.scpt:
        Apr 19 15:30:12 hostname [0x0-0x2e92e9].com.example.uninstaller[39219]: execution error: «constant afdmasup» doesn’t understand the «event earsffdr» message. (-1708)

    Am I going about this all wrong? I just want to run the equivalent of "sudo /usr/bin/osascript ...".

