Search Results

Search found 22000 results on 880 pages for 'worker process'.


  • Virtual pages for my plugin

    - by Fini
    Hi, I am currently in the process of making a WordPress plugin which is going to parse some external data (products) from various web services and present them as normal pages in WordPress. I would like to avoid actually creating the pages programmatically and instead just generate them on the fly, to avoid any synchronization issues if a product is deleted and so forth. My plugin is going to have a base URL which it will hook onto, for example /products/, and then I would generate each product page by calling /products/some-product-name/. I also anticipate the need for URIs like /products/category/some-category-name/, which I will use to list all items in that category. Since I am new to WordPress plugin development, I am looking for some tips and advice to get me started on the right foot. Any help is highly appreciated ;)

    Read the article

  • TestCase scripting framework

    - by Mikey
    Hey guys, for our webapp testing environment we're currently using WatiN with a bunch of unit tests, and we're looking to move to Selenium and use more frameworks. We're currently looking at Selenium 2 + Gallio + xUnit.net. However, one of the things we're really looking to get away from is compiled test cases. Ideally we want test cases that can be edited in VS with IntelliSense, but that don't require recompiling the assembly every single time we make a small change. Are there any frameworks likely to help with this issue? Are there any nice UI tools to help manage a massive amount of test cases? Ideally we want the test-case writing process to be simple so that more testers can aid in writing them. Cheers

    Read the article

  • Android app works on emulator but not on phone ("Can't dispatch DDM chunk XXXX: no handler defined")

    - by JR_vv2
    Hey all, I made a very simple application to start playing around with Android development. It works fine on the emulator, but it gives me the following error when I try to install it on my HTC Hero (v1.5):

    Sorry! The application Simple Dial (process com.foo.simpledial) has stopped unexpectedly. Please try again. (Force Close button)

    and in the Eclipse console, I get the following messages:

      [2010-06-14 23:10:52 - Simple Dial] Uploading Simple Dial.apk onto device 'HT9BSHF00222'
      [2010-06-14 23:10:53 - Simple Dial] Installing Simple Dial.apk...
      [2010-06-14 23:10:56 - Simple Dial] Success!
      [2010-06-14 23:10:56 - Simple Dial] Starting activity com.alanvaghti.simpledial.DialActivity on device
      [2010-06-14 23:10:57 - Simple Dial] ActivityManager: Can't dispatch DDM chunk 46454154: no handler defined
      [2010-06-14 23:10:57 - Simple Dial] ActivityManager: Can't dispatch DDM chunk 4d505251: no handler defined
      [2010-06-14 23:10:57 - Simple Dial] ActivityManager: Starting: Intent { action=android.intent.action.MAIN categories={android.intent.category.LAUNCHER} comp={com.alanvaghti.simpledial/com.alanvaghti.simpledial.DialActivity} }

    I did put android:debuggable="true" inside the application tag in the manifest.xml. Any ideas on what is going on? Thanks in advance.

    Read the article

  • Java BufferedReader readline blocking?

    - by tgguy
    I want to make an HTTP request and then read the response, as sketched here:

      URLConnection c = new URL("http://foo.com").openConnection();
      c.setDoOutput(true);
      /* write an HTTP request here using a new OutputStreamWriter(c.getOutputStream()) */
      BufferedReader reader = new BufferedReader(new InputStreamReader(c.getInputStream()));
      reader.readLine();

    But my question is: if the request I send takes a long time before a response is received, what happens in the call to reader.readLine() above? Will this process stay running/runnable on the CPU, or will it be taken off the CPU and notified to wake up and run again when there is I/O to be read? If it stays on the CPU, what can be done to make it get off and be notified later?

    Read the article

  • Inserting nil Values into Sqlite Database?

    - by Chris
    I am working on an iPhone app where I am pulling data from an XML file and inserting it into a SQLite database located in the app. I am able to successfully do this, but it appears that if I try to sqlite3_bind_text with an NSString that has a value of nil, the app quickly dies. This is an example of code that fails (modified for this example):

      // idvar = 1
      // valuevar = nil
      const char *sqlStatement = "insert into mytable (id, value) VALUES(?, ?)";
      sqlite3_stmt *compiledStatement = nil;
      sqlite3_prepare_v2(database, sqlStatement, -1, &compiledStatement, NULL);
      sqlite3_bind_int(compiledStatement, 1, [idvar intValue]);
      sqlite3_bind_text(compiledStatement, 2, [valuevar UTF8String], -1, SQLITE_TRANSIENT);

    Read the article

  • Eclipse servlet problem

    - by raxvan
    Hello, I have created a dynamic web page for a servlet. When I try to run the project I get the following error:

      HTTP Status 500
      javax.servlet.ServletException: Error instantiating servlet class ch.uzh.ifi.attempto.aceeditor.MyMainServlet
          org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
          org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
          org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:873)
          org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
          org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
          org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
          org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
          java.lang.Thread.run(Unknown Source)

    How can I fix this?

    Read the article

  • File upload in an existing site with jQuery and CodeIgniter

    - by Sam
    I am a newbie to the Ajax world. I am working on a site which uses jQuery and CodeIgniter and which processes big files, around 2 GB. It basically parses a file and stores some data extracted from it; it uses Ajax to show how far the files have been processed, etc. Now I want to change the way we process files. I want to first store the file on the server side and then start the processing. I evaluated the upload class of CodeIgniter, but it looks like I cannot use it for this purpose, as this class works with a field_name and I could not find a way to make an Ajax call to the upload class. My question is: what would be suitable for my problem? Thanks in advance, Sam

    Read the article

  • Recommendations for a rich, scalable internet application

    - by Wouter Roux
    I am planning to develop a web application that must perform the following actions:
    - user authentication
    - allow an authenticated user to download data from a USB device (roughly 5 MB of data)
    - upload the data from the USB device to a processing server
    - process the data and display the results to the user

    Further requirements/restrictions:
    - the USB driver supports Windows (2000, XP, Vista, 7)
    - the application must support IE, Firefox and Chrome
    - the USB driver must be installed when pointing to the web application for the first time
    - the USB driver exposes functionality through exported functions in a DLL
    - scalability and eye candy are important

    Server details: Windows Server 2008 Enterprise x64, IIS 7.0, SQL Server 2008. With the limited details specified: what technology would you recommend (e.g. Silverlight / ASP.NET MVC / WCF)? What practices/patterns/3rd-party controls would you recommend?

    Read the article

  • How do I claim a low-numbered port as non-root the "right way"?

    - by qbxk
    I have a script that I want to run as a daemon listening on a low-numbered port (< 1024). The script is in Python, though answers in Perl are also acceptable. The script is being daemonized using start-stop-daemon in a startup script, which may complicate the answer. What I really don't want (I think) is to type ps -few and see this process running with "root" on its line. How do I do it? From my less-than-fully-educated-about-system-calls perspective, I can see three avenues:
    1. Run the script as root (no --user/--group/--chuid to start-stop-daemon), and have it de-escalate its user after it claims the port.
    2. Setuid root on the script (chmod u+s) and run the script as the regular user (via --user/--group/--chuid to start-stop-daemon; the startup script still has to be called as root); in the script, acquire root privileges, claim the port, and then revert back to the normal user.
    3. Something else I'm unaware of.

    Read the article

  • How do I get the ID or reference of the listview row I am clicking a checkbox in?

    - by danielea
    I have an ASP.NET 2.0 ListView control (the "parent"), and configured inside this ListView I have another ListView (the "child"). For each row the parent has, there is potentially a child ListView control which can have 1-3 rows. Each row has two checkboxes (a select checkbox and a deny checkbox). I need to process these checkboxes in JavaScript so that if one select checkbox is chosen on any of the rows, all other select checkboxes are unchecked AND the deny checkbox for that row only is unchecked. The rows which were NOT selected CAN have their deny checkboxes checked. What is the best approach to this?

    Read the article

  • Joomla, Drupal or DotNetNuke?

    - by hovercraft2x
    I'm in the process of starting my own podcasting website. I'm trying to find a good CMS to use. At work I'm a .NET developer and we are starting to use DNN for some small projects around the office. I'd like the new business to be fully open source, but it might be nice to use DNN because I will already have some experience with it from work. I'm worried that I'll be spreading myself too thin learning PHP/LAMP at home and .NET at work. What does everyone recommend?

    Read the article

  • Design Patterns: What's the antithesis of Front Controller?

    - by Brian Lacy
    I'm familiar with the Front Controller pattern, in which all events/requests are processed through a single centralized controller. But what would you call it when you wish to keep the various parts of an application separate at the presentation layer as well? My first thought was "Facade", but it turns out that's something entirely different. In my particular case, I'm converting an application from a sprawling procedural mess to a clean MVC architecture, but it's a long-term process -- we need to keep things separated as much as possible to facilitate a slow integration with the rest of the system. Our application is web-based and built in PHP, so for instance we have an "index.php" and an IndexController, an "account.php" and an AccountController, a "dashboard.php" and a DashboardController, and so on.

    Read the article

  • Handle user logoff or machine shutdown requests on Windows ME

    - by skylap
    I have to write a C# application that runs on Windows ME. Yes, I mean that Microsoft operating system that was forgotten a long, long time ago. My program needs no user interaction, and as Windows ME doesn't support services, it will be a console application. Furthermore, it will be used on more modern operating systems, where the user can choose whether to start it as a console application or install it as a Windows service. Now suppose the software is running on Windows ME and the user decides to log off or shut down the machine without quitting my software first. Windows ME complains about my program still running and asks if it should kill the process. Apart from the bad user experience, this means that the application is not shut down properly. So I am looking for a way to be informed when the user logs off or wants to shut down the machine, so that I can perform a proper shutdown of my software first.
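
    A minimal sketch of one possible approach for the console case: register a Win32 console control handler via P/Invoke and react to the logoff/shutdown events. The constants and the SetConsoleCtrlHandler signature below are the standard Win32 ones; whether this behaves the same on Windows ME under the old .NET Framework runtimes is an assumption, not something verified here, and the Cleanup method is a hypothetical placeholder.

      using System;
      using System.Runtime.InteropServices;

      static class ShutdownHook
      {
          // Win32 console control events: 5 = CTRL_LOGOFF_EVENT, 6 = CTRL_SHUTDOWN_EVENT.
          private delegate bool ConsoleCtrlHandler(int ctrlType);

          [DllImport("kernel32.dll", SetLastError = true)]
          private static extern bool SetConsoleCtrlHandler(ConsoleCtrlHandler handler, bool add);

          // Keep a reference so the delegate is not garbage collected while registered.
          private static readonly ConsoleCtrlHandler Handler = OnConsoleCtrl;

          public static void Install()
          {
              SetConsoleCtrlHandler(Handler, true);
          }

          private static bool OnConsoleCtrl(int ctrlType)
          {
              if (ctrlType == 5 || ctrlType == 6) // logoff or shutdown
              {
                  Cleanup();
              }
              return false; // let the default handling continue
          }

          private static void Cleanup()
          {
              // Hypothetical placeholder: flush state, close files, stop worker threads, etc.
          }
      }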

    Read the article

  • How do I get a Mac ".command" file to automatically quit after running a shell script?

    - by LOlliffe
    In my shell script, the last lines are:

      ...
      echo "$l"
      done
      done
      exit

    I have the Terminal preference set to "When the shell exits: Close the window". In all other cases, when I type "exit" or "logout" in Terminal, the window closes; but for this ".command" file (I can double-click on my shell script file and the script runs), instead of closing the window, even though the file's code says "exit", what shows on the screen is:

      ...
      $l
      done
      logout
      [Process completed]

    ...and the window remains open. Does anyone know how to get a shell script to run and then automatically close the Terminal window on completion? Thanks!

    Read the article

  • Easily switching ConnectionStrings on publish to Azure

    - by David Pfeffer
    I'm currently building an Azure Web Role. I am testing this project against a local database server on localhost. Then, when confident that the project is working, I publish it to Staging on Windows Azure. However, I also have to remember to change the connection string to point to the live SQL server on SQL Azure before deploying, and then change it back to localhost afterwards. Is there any nice way to automate this, or perhaps a different process that avoids the issue altogether? For example, is there a way to have a configuration file for Azure that isn't updated with every deploy?
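
    One commonly suggested direction, sketched here under assumptions rather than as a verified recipe for this exact project: keep the connection string in the Azure service configuration (.cscfg), which can differ per deployment while the package itself stays unchanged, and fall back to the ordinary .config file when running outside Azure. The setting name "DbConnection" is illustrative.

      using System.Configuration;
      using Microsoft.WindowsAzure.ServiceRuntime;

      public static class ConnectionStrings
      {
          // Reads from ServiceConfiguration.cscfg when running under the Azure role
          // environment, and from web.config/app.config for local development.
          public static string Database
          {
              get
              {
                  if (RoleEnvironment.IsAvailable)
                  {
                      return RoleEnvironment.GetConfigurationSettingValue("DbConnection");
                  }
                  return ConfigurationManager.ConnectionStrings["DbConnection"].ConnectionString;
              }
          }
      }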

    Read the article

  • Adding a file of the same name into the same list and unable to update the name of the SPFile

    - by BeraCim
    Hi all: I'm having difficulty adding a file of the same name to the same list and subsequently updating/changing/modifying the name of an SPFile. Basically, this is what I'm trying to do:

      string fileName = "something"; // obtained from a loop -- loop omitted here
      SPFile file = folder.Files.Add(fileName, otherFile.OpenBinary());

    The Add method throws a runtime exception when it finds another file of the same name in the same list/folder. So I thought of changing the file name later in the process:

      string newGuid = Guid.NewGuid().ToString();
      SPFile file = folder.Files.Add(newGuid, otherFile.OpenBinary());
      // some other processing... afterwards, rename the file
      file.Name = fileName;
      file.Item.Update();

    A few minutes of googling indicated I need to either move the file around, or update the name field by using ["Name"] instead. I was wondering: are there any other, better ways to get around this problem? Thanks.
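
    For reference, a small hedged sketch of the two routes mentioned above (rename via the underlying list item, or move the file). Whether the Name field is writable for a given list is an assumption; the helper name and parameters are illustrative, not from any particular codebase.

      using System;
      using Microsoft.SharePoint;

      static class FileRenamer
      {
          // Adds 'contents' under a temporary GUID name, then renames it to 'desiredName'.
          public static SPFile AddAndRename(SPFolder folder, byte[] contents, string desiredName)
          {
              string tempName = Guid.NewGuid().ToString();
              SPFile file = folder.Files.Add(tempName, contents);

              // Option 1: update the underlying list item's Name field.
              SPListItem item = file.Item;
              item["Name"] = desiredName;   // assumption: Name is writable for this list
              item.Update();

              // Option 2 (alternative): move the file to the desired URL instead.
              // file.MoveTo(folder.Url + "/" + desiredName, true); // true = overwrite if it exists

              return file;
          }
      }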

    Read the article

  • .NET Assembly Diff / Compare Tool - What's available?

    - by STW
    I'd like to be able to do a code-level diff between two assemblies; the Diff plug-in for Reflector is the closest thing I've found so far, but comparing an entire assembly with it is a manual process requiring me to drill down into every namespace/class/method. The other tools I've found so far appear to be limited to API-level differences (namespaces, classes, methods), which won't cut it for what I'm looking for. Does anyone know of such a tool? My requirements (from highest to lowest priority) are:
    - be able to analyze/reflect the code content of two versions of the same assembly and report the differences
    - accept a folder or group of assemblies as input and quickly compare them (similar to WinMerge's folder diffs)
    - quickly determine whether two assemblies are equivalent at the code level (not just the APIs)
    - allow easy drill-down to view the differences
    - export reports about the differences
    (Personally I like WinMerge for text diffs, so an application with a similar interface would be great.)

    Read the article

  • How should I handle incomplete packet buffers?

    - by Benjamin Manns
    I am writing a client for a server that typically sends data as strings of 500 bytes or less. However, the data will occasionally exceed that, and a single set of data could contain 200,000 bytes, for all the client knows (on initialization or significant events). However, I would prefer not to have each client running with a 50 MB socket buffer (if that's even possible). Each set of data is delimited by a null \0 character. What kind of structure should I look at for storing partially received data sets? For example, the server may send ABCDEFGHIJKLMNOPQRSTUV\0WXYZ\0123!\0. I would want to process ABCDEFGHIJKLMNOPQRSTUV, WXYZ, and 123! independently. Also, the server could send ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890LOL123HAHATHISISREALLYLONG without the terminating character. I would want that data set stored somewhere for later appending and processing. Also, I'm using asynchronous socket methods (BeginSend, EndSend, BeginReceive, EndReceive), if that matters.
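
    A minimal sketch of one way to structure this, assuming text data and the '\0' delimiter described above: keep a per-connection accumulator, append whatever each receive delivers, and cut out only complete messages. The class and event names (MessageAssembler, OnMessage) are illustrative, not from any particular library.

      using System;
      using System.Text;

      // Accumulates incoming fragments and raises complete '\0'-delimited messages.
      class MessageAssembler
      {
          private readonly StringBuilder _pending = new StringBuilder();

          public event Action<string> OnMessage;

          // Call this from the EndReceive callback with the bytes actually read.
          public void Feed(byte[] buffer, int count)
          {
              _pending.Append(Encoding.ASCII.GetString(buffer, 0, count));

              int delimiter;
              while ((delimiter = IndexOfNull(_pending)) >= 0)
              {
                  string message = _pending.ToString(0, delimiter);
                  _pending.Remove(0, delimiter + 1);   // drop the message and its '\0'
                  if (OnMessage != null) OnMessage(message);
              }
              // Anything left in _pending is an incomplete message awaiting more data.
          }

          private static int IndexOfNull(StringBuilder sb)
          {
              for (int i = 0; i < sb.Length; i++)
                  if (sb[i] == '\0') return i;
              return -1;
          }
      }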

    Read the article

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

    At my day job I look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many reasons for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and refines the code quite a lot because of its design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also WinDbg). I do this much more often when my tests are red. That way I get a much better understanding of what my code really does, and not what I think it should be doing.

    This time I want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good, and much code was written since then. In 2005 a new version of the .NET Framework arrived which brought many new things like generics and new data structures. The old-fashioned Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object-to-type conversion (aka boxing) was no longer necessary. I think 95% of all Hashtables and dictionaries use string as the key. Often it is convenient to ignore casing, to make it easy to look up values which the user entered. An often-followed route is to convert the string to upper case before putting it into the Hashtable:

      Hashtable Table = new Hashtable();

      void Add(string key, string value)
      {
          Table.Add(key.ToUpper(), value);
      }

    This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case-insensitively. Second, we can switch over to the (by now also old) Dictionary type to become a little faster, and we can keep the original keys (not upper-cased) in the dictionary:

      Dictionary<string, string> DictTable =
          new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

      void AddDict(string key, string value)
      {
          DictTable.Add(key, value);
      }

    Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly. Today, in the many-core era, we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual-core machine. Threading makes it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve, depending on the problem you are trying to solve.
    Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

      int Counter;

      void IJustLearnedToUseThreads()
      {
          var t1 = new Thread(ThreadWorkMethod);
          t1.Start();
          var t2 = new Thread(ThreadWorkMethod);
          t2.Start();
          t1.Join();
          t2.Join();

          if (Counter != 2 * Increments)
              throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
      }

      const int Increments = 10 * 1000 * 1000;

      void ThreadWorkMethod()
      {
          for (int i = 0; i < Increments; i++)
          {
              Counter++;
          }
      }

    It throws an exception with a message like "Hmm 10.222.287 != 20.000.000" and never reaches the expected count. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

      Counter = Counter + 1

    which involves reading the counter from a memory location into the CPU, incrementing the value on the CPU, and writing the new value back to the memory location. When we look at the generated assembly code we will see only

      inc dword ptr [ecx+10h]

    which is a single instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU reads a value from main memory into its cache, modifies it, and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so one CPU can make changes to a value which has changed in the meantime in main memory. From the exception you can see that we incremented the value 20 million times, but half of the changes were lost because we overwrote the already-changed value from the other thread. This is a very common case, and people learn to protect their data with proper locking:

      void Intermediate()
      {
          var time = Stopwatch.StartNew();
          Action acc = ThreadWorkMethod_Intermediate;
          var ar1 = acc.BeginInvoke(null, null);
          var ar2 = acc.BeginInvoke(null, null);
          ar1.AsyncWaitHandle.WaitOne();
          ar2.AsyncWaitHandle.WaitOne();

          if (Counter != 2 * Increments)
              throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

          Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
      }

      void ThreadWorkMethod_Intermediate()
      {
          for (int i = 0; i < Increments; i++)
          {
              lock (this)
              {
                  Counter++;
              }
          }
      }

    This is better and uses the .NET thread pool to get rid of manual thread management. It gives the expected result, but it can result in deadlocks because you lock on this. That is in general a bad idea, since it can lead to deadlocks when other threads use your class instance as a lock object. It is therefore recommended to create a private object as the lock object, to ensure that nobody else can lock your lock object. When you read more about threading you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but it can still work because your CPU architecture gives you more invariants than the CLR memory model does. For a simple counter there is an easy lock-free alternative in the Interlocked class in .NET. As a general rule you should not try to write lock-free algorithms, since most likely you will fail to get it right on all CPU architectures.
      void Experienced()
      {
          var time = Stopwatch.StartNew();
          Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
          Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
          t1.Wait();
          t2.Wait();

          if (Counter != 2 * Increments)
              throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

          Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
      }

      void ThreadWorkMethod_Experienced()
      {
          for (int i = 0; i < Increments; i++)
          {
              Interlocked.Increment(ref Counter);
          }
      }

    Since time moves forward, we no longer use threads explicitly but the much nicer Task abstraction, which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

      lock inc dword ptr [eax]

    The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations, which lock the memory bus to prevent other processors from reading stale values, are such things. Second: this is the same increment instruction prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, an exchange, and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever and look at the generated IL code and try to reason about its efficiency, you will fail. Only the generated machine code counts. Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how well we are doing currently:

      Level                       Time in s
      IJustLearnedToUseThreads    (flawed code)
      Intermediate                1.5  (lock)
      Experienced                 0.3  (Interlocked.Increment)
      Master                      0.1  (1.0 with a plain int[2])

    That lock-free thing is really nice. But if you read more about CPU caches, cache coherency and false sharing, you can do even better:

      // Cache line size is 64 bytes on my machine with an 8-way associative cache;
      // try for yourself, e.g. 64 on more modern CPUs.
      int[] Counters = new int[12];

      void Master()
      {
          var time = Stopwatch.StartNew();
          Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
          Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
          t1.Wait();
          t2.Wait();

          Counter = Counters[0] + Counters[Counters.Length - 1];
          if (Counter != 2 * Increments)
              throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

          Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
      }

      void ThreadWorkMethod_Master(object number)
      {
          int index = (int)number;
          for (int i = 0; i < Increments; i++)
          {
              Counters[index]++;
          }
      }

    The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (by a factor of 3). Each CPU core has its own cache line size, which is something in the range of 16-256 bytes. When you access a value from one location, the CPU does not fetch only one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and to non-obvious code which is faster although it has many more memory reads than another algorithm. So what have we done here?
    We started with correct code, but it was lacking knowledge of how to use the .NET Base Class Libraries optimally. Then we tried to get fancy, used threads for the first time, and failed. Our next try was better, but it still had non-obvious issues (the lock object was exposed to the outside). Knowledge increased further and we found a lock-free version of our counter, which is nice and clean and a perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to your data structures and to CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it pays off to know the details down to the assembly level. Only if you continue to learn and to dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which helps with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that, the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very-hard-to-find bug, or a new way to restructure data to make something 10 times faster. Now I need to get some sleep…

    Read the article

  • Top techniques to avoid 'data scraping' from a website database

    - by Addsy
    I am setting up a site using PHP and MySQL that is essentially just a web front-end to an existing database. Understandably, my client is very keen to prevent anyone from being able to make a copy of the data in the database, yet at the same time wants everything publicly available and even a "view all" link to display every record in the DB. Whilst I have put everything in place to prevent attacks such as SQL injection, there is nothing to prevent anyone from viewing all the records as HTML and running some sort of script to parse this data back into another database. Even if I were to remove the "view all" link, someone could still, in theory, use an automated process to go through each record one by one and compile them into a new database, essentially pinching all the information. Does anyone have any good tactics for preventing, or even just deterring, this that they could share? Thanks

    Read the article

  • I cannot connect to SQL Server Express using VB.NET

    - by konin
    Can someone tell me what I am missing? I am using this connection string to connect to my database and it still won't connect:

      Dim str As String = "Provider = .NET Framework Data Provider for SQL Server; Data Source=C:\Users\konin\Documents\UHMS\bin\Debug\UHMS.mdf;Integrated Security=True;Connect Timeout=30;User Instance=True"

    This is the process I used to get the data source: right-click the database, select Properties, and copy the data source shown there. I hope I am clear enough. Thanks for reading. Edit: the error message is as follows: "unable to connect to database please contact administrator".
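
    As a point of comparison only (a hedged sketch, not a verified fix for this exact setup): ADO.NET's SqlConnection does not accept a "Provider=" keyword, and a local .mdf attached to SQL Server Express is usually referenced through AttachDbFilename rather than Data Source. Shown in C# for consistency with the other sketches on this page; the instance name .\SQLEXPRESS and the .mdf path are assumptions that may differ on another machine.

      using System;
      using System.Data.SqlClient;

      class ConnectTest
      {
          static void Main()
          {
              // Assumption: a default SQL Server Express instance and the .mdf path from the question.
              string connStr =
                  @"Data Source=.\SQLEXPRESS;" +
                  @"AttachDbFilename=C:\Users\konin\Documents\UHMS\bin\Debug\UHMS.mdf;" +
                  "Integrated Security=True;Connect Timeout=30;User Instance=True";

              using (var conn = new SqlConnection(connStr))
              {
                  conn.Open();
                  Console.WriteLine("Connected to: " + conn.Database);
              }
          }
      }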

    Read the article

  • C# SqlDataAdapter not populating DataSet

    - by Wesley
    I have searched the net only to not quite find the problem I am running into. I am currently having an issue getting a SqlDataAdapter to populate a DataSet. I am running Visual Studio 2008 and the query is being sent to a local instance of SQL Server 2008. If I run the query itself in SQL Server, I do get information. The code is as follows:

      string theQuery = "select Password from Employees where employee_ID = '@EmplID'";
      SqlDataAdapter theDataAdapter = new SqlDataAdapter();
      theDataAdapter.SelectCommand = new SqlCommand(theQuery, conn);
      theDataAdapter.SelectCommand.Parameters.Add("@EmplID", SqlDbType.VarChar).Value = "EmployeeName";
      theDataAdapter.Fill(theSet);

    The code to read the DataSet:

      foreach (DataRow theRow in theSet.Tables[0].Rows)
      {
          // process row info
      }

    If there is any more info I can supply, please let me know.

    Read the article

  • RenderAction in an HtmlHelperExtension Method?

    - by TimLeung
    I am trying to call the RenderAction extension method within my own HTML helper:

      System.Web.Mvc.Html.ChildActionExtensions.RenderAction(helper, "account", "login");

    This is so that, along with some additional logic, all HTML helpers can use a common method-name structure when called in the view:

      <%= Html.CompanyName().RenderAccount() %>

    But the problem I am having is that ASP.NET complains about not finding the actual route it needs to process. It does not seem to use the action and controller names I pass in; it seems to only reference the current route. Any ideas how I can package up the RenderAction?
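
    For context, a hedged sketch of how such a wrapper is commonly structured, assuming an MVC child action named "Login" on an AccountController; the helper and action names are illustrative, and the intermediate CompanyName() chaining object from the example above is omitted here for brevity. Note that ChildActionExtensions.RenderAction takes the action name first, then the controller name.

      using System.Web.Mvc;
      using System.Web.Mvc.Html;

      public static class CompanyNameHelpers
      {
          // Renders the Account/Login child action from any view.
          public static void RenderAccount(this HtmlHelper helper)
          {
              helper.RenderAction("Login", "Account");
          }
      }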

    Read the article

  • Monolog conversations in SQL Service Broker 2008

    - by hemil
    Hi, I have a scenario in which I need to process (in SQL Server) messages being delivered as .xml files into a folder, in real time. I started investigating SQL Service Broker for my queuing needs. Basically, I want Service Broker to pick up my .xml files and place them in a queue as they arrive in the folder. But SQL Service Broker does not support "monolog" conversations, at least not in the current version; it supports only a dialog between an initiator and a target service. I could use MSMQ, but then I would have two things to maintain: the .NET code for file processing in MSMQ and the SQL Server T-SQL stored procs. What options do I have left? Thanks.
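
    One frequently suggested direction, sketched here under assumptions rather than as a recommendation for this specific setup: a small .NET watcher that notices new .xml files and hands their contents to a stored procedure, which could then SEND on a Service Broker conversation from inside the database. The folder path, connection string, and procedure name below are made up for illustration.

      using System;
      using System.Data;
      using System.Data.SqlClient;
      using System.IO;

      class XmlFolderWatcher
      {
          static void Main()
          {
              // Assumption: incoming files land in C:\inbox; a real version should also
              // handle files still being written when the Created event fires.
              var watcher = new FileSystemWatcher(@"C:\inbox", "*.xml");
              watcher.Created += (sender, e) =>
              {
                  string xml = File.ReadAllText(e.FullPath);
                  using (var conn = new SqlConnection("Data Source=.;Initial Catalog=Messages;Integrated Security=True"))
                  using (var cmd = new SqlCommand("dbo.EnqueueIncomingXml", conn))
                  {
                      cmd.CommandType = CommandType.StoredProcedure;
                      cmd.Parameters.Add("@Payload", SqlDbType.Xml).Value = xml;
                      conn.Open();
                      cmd.ExecuteNonQuery(); // the proc could enqueue via Service Broker internally
                  }
              };
              watcher.EnableRaisingEvents = true;

              Console.WriteLine("Watching for .xml files; press Enter to stop.");
              Console.ReadLine();
          }
      }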

    Read the article

  • Create a SOCKS Proxy that does nothing special

    - by rwired
    I am trying to create a SOCKS proxy in C++ that runs as a background process on localhost. If the user's browser is configured to use the proxy, I want all HTTP requests to be passed along through the normal TCP/IP stack, i.e. the browser will behave exactly as it normally would. Eventually I will add another layer which will check whether the requested resource matches certain criteria and, if so, handle the request differently. But for now I'm just trying to solve the basic problem: how do I create a SOCKS proxy that doesn't change anything?

    Read the article
