Search Results

Search found 20087 results on 804 pages for 'css3 multiple backgrounds'.

Page 352/804 | < Previous Page | 348 349 350 351 352 353 354 355 356 357 358 359  | Next Page >

  • Preventing concurrent data modification in Web based app

    - by Manoj
    Hi, I am building a web app in Silverlight which allows users to view and edit a database. In order to prevent multiple users from editing the same data, I was thinking of implementing a lock-and-key mechanism, so that other users are made to wait while one particular user is editing the data. Is there any way to have a variable (a flag specifying whether a user is editing the data) on the server that can be shared across multiple clients? Is there a better way to manage this kind of concurrent data access issue?
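
    A minimal sketch of the shared-flag idea, assuming the server exposes a service class whose state is shared by all client sessions (the class, method, and field names are hypothetical); a ConcurrentDictionary acts as the per-record lock registry:

        using System;
        using System.Collections.Concurrent;

        public class EditLockService
        {
            // recordId -> user currently holding the edit lock (shared across all clients)
            private static readonly ConcurrentDictionary<int, string> Locks =
                new ConcurrentDictionary<int, string>();

            // Returns true if the caller acquired the lock, false if someone else holds it.
            public bool TryBeginEdit(int recordId, string user)
            {
                return Locks.TryAdd(recordId, user);
            }

            public void EndEdit(int recordId, string user)
            {
                string holder;
                if (Locks.TryGetValue(recordId, out holder) && holder == user)
                {
                    Locks.TryRemove(recordId, out holder);
                }
            }
        }

    An alternative worth weighing is optimistic concurrency (a version or timestamp column checked in the UPDATE's WHERE clause), which survives server restarts and abandoned sessions better than an in-memory lock registry.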

    Read the article

  • How do I implement .net plugins without using AppDomains?

    - by Abtin Forouzandeh
    Problem statement: Implement a plug-in system that allows the associated assemblies to be overwritten (i.e. avoid file locking). In .NET, specific assemblies cannot be unloaded; only entire AppDomains can be unloaded. I'm posting this because when I was trying to solve the problem, every solution made reference to using multiple AppDomains. Multiple AppDomains are very hard to implement correctly, even when architected in at the start of a project. Also, AppDomains didn't work for me because I needed to transfer a Type across domains as a setting for the Speech Server workflow's InvokeWorkflow activity. Unfortunately, sending a type across domains causes the assembly to be injected into the local AppDomain. Also, this is relevant to IIS. IIS has a Shadow Copy setting that allows an executing assembly to be overwritten while it's loaded into memory. The problem is that (at least under XP; I didn't test on production 2003 servers) when you programmatically load an assembly, the shadow copy doesn't work (because you are loading the DLL, not IIS).
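
    One common workaround, sketched here rather than taken from the question: read the plug-in file into memory and load the assembly from the byte array, so the DLL on disk is never locked and can be overwritten at any time (pluginPath is a hypothetical path). Note the loaded assembly itself still cannot be unloaded without tearing down its AppDomain.

        using System;
        using System.IO;
        using System.Reflection;

        public static class PluginLoader
        {
            public static Assembly LoadWithoutLocking(string pluginPath)
            {
                // Loading from bytes means the CLR never opens (and locks) the DLL file itself.
                byte[] raw = File.ReadAllBytes(pluginPath);

                // Keep debug symbols usable if a matching .pdb sits next to the DLL.
                byte[] pdb = null;
                string pdbPath = Path.ChangeExtension(pluginPath, ".pdb");
                if (File.Exists(pdbPath))
                    pdb = File.ReadAllBytes(pdbPath);

                return pdb == null ? Assembly.Load(raw) : Assembly.Load(raw, pdb);
            }
        }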

    Read the article

  • Select in a many-to-many relationship in MySQL

    - by Joff Williams
    I have two tables in a MySQL database, Locations and Tags, and a third table LocationsTagsAssoc which associates the two tables and treats them as a many-to-many relationship. Table structure is as follows:

        Locations
        ---------
        ID int (Primary Key)
        Name varchar(128)

        LocationsTagsAssoc
        ------------------
        ID int (Primary Key)
        LocationID int (Foreign Key)
        TagID int (Foreign Key)

        Tags
        ----
        ID int (Primary Key)
        Name varchar(128)

    So each location can be tagged with multiple tagwords, and each tagword can be tagged to multiple locations. What I want to do is select only Locations which are tagged with all of the tag names supplied. For example: I want all locations which are tagged with both "trees" and "swings". Location "Park" should be selected, but location "Forest" should not. Any insight would be appreciated. Thanks!
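
    A sketch of the usual relational-division approach, assuming tag names are unique in Tags: join through the association table, keep only the wanted tags, and require that every one of them matched.

        SELECT l.ID, l.Name
        FROM Locations l
        JOIN LocationsTagsAssoc a ON a.LocationID = l.ID
        JOIN Tags t ON t.ID = a.TagID
        WHERE t.Name IN ('trees', 'swings')
        GROUP BY l.ID, l.Name
        HAVING COUNT(DISTINCT t.Name) = 2;   -- 2 = number of tags supplied

    The HAVING count must equal the number of tags in the IN list, so the query is typically built dynamically from the supplied tag names.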

    Read the article

  • Downloading HTTP URLs asynchronously in C++

    - by Joey Adams
    What's a good way to download HTTP URLs (e.g. http://0.0.0.0/foo.htm ) in C++ on Linux? I strongly prefer something asynchronous. My program will have an event loop that repeatedly initiates multiple (very small) downloads and acts on them when they finish (either by polling or by being notified somehow). I would rather not have to spawn multiple threads/processes to accomplish this; that shouldn't be necessary. Should I look into libraries like libcurl? I suppose I could implement it manually with non-blocking TCP sockets and select() calls, but that would likely be less convenient.
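
    A minimal sketch of the kind of single-threaded event loop libcurl's multi interface gives you, assuming a reasonably recent libcurl (curl_multi_wait) and placeholder URLs:

        #include <curl/curl.h>
        #include <cstdio>

        // Discard the body; a real program would buffer it per handle via the userdata pointer.
        static size_t sink(char* /*data*/, size_t size, size_t nmemb, void* /*userdata*/) {
            return size * nmemb;
        }

        int main() {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURLM* multi = curl_multi_init();

            const char* urls[] = { "http://example.com/a", "http://example.com/b" };
            for (int i = 0; i < 2; ++i) {
                CURL* easy = curl_easy_init();
                curl_easy_setopt(easy, CURLOPT_URL, urls[i]);
                curl_easy_setopt(easy, CURLOPT_WRITEFUNCTION, sink);
                curl_multi_add_handle(multi, easy);          // queue the transfer, non-blocking
            }

            int still_running = 0;
            do {
                curl_multi_perform(multi, &still_running);   // drive all transfers a little
                curl_multi_wait(multi, NULL, 0, 1000, NULL); // sleep until a socket is ready

                // See which transfers finished and act on them.
                int msgs_left = 0;
                while (CURLMsg* msg = curl_multi_info_read(multi, &msgs_left)) {
                    if (msg->msg == CURLMSG_DONE) {
                        std::printf("download finished, result code %d\n", (int)msg->data.result);
                        curl_multi_remove_handle(multi, msg->easy_handle);
                        curl_easy_cleanup(msg->easy_handle);
                    }
                }
            } while (still_running > 0);

            curl_multi_cleanup(multi);
            curl_global_cleanup();
            return 0;
        }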

    Read the article

  • Get current updated column name to use in a trigger

    - by Serge
    Is there a way to actually get the column name that was updated in order to use it in a trigger? Basically I'm trying to have an audit trail whenever a user inserts or updates a table (in this case it has to do with a Contact table):

        CREATE TRIGGER `after_update_contact` AFTER UPDATE ON `contact`
        FOR EACH ROW
        BEGIN
            INSERT INTO user_audit (id_user, even_date, table_name, record_id, field_name, old_value, new_value)
            VALUES (NEW.updatedby, NEW.lastUpdate, 'contact', NEW.id_contact, [...])
        END

    How can I get the name of the column that's been updated, and from that get the OLD and NEW values of that column? If multiple columns have been updated in a row, or even multiple rows, would it be possible to have an audit entry for each update?
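
    MySQL triggers can't enumerate "the changed column" dynamically, so the usual pattern is one explicit OLD/NEW comparison per audited column. A sketch reusing the question's table and audit columns (the audited columns first_name and phone are made up for illustration):

        CREATE TRIGGER `after_update_contact` AFTER UPDATE ON `contact`
        FOR EACH ROW
        BEGIN
            -- <=> is NULL-safe equality; NOT (OLD.col <=> NEW.col) means "the value really changed"
            IF NOT (OLD.first_name <=> NEW.first_name) THEN
                INSERT INTO user_audit (id_user, even_date, table_name, record_id, field_name, old_value, new_value)
                VALUES (NEW.updatedby, NEW.lastUpdate, 'contact', NEW.id_contact, 'first_name', OLD.first_name, NEW.first_name);
            END IF;
            IF NOT (OLD.phone <=> NEW.phone) THEN
                INSERT INTO user_audit (id_user, even_date, table_name, record_id, field_name, old_value, new_value)
                VALUES (NEW.updatedby, NEW.lastUpdate, 'contact', NEW.id_contact, 'phone', OLD.phone, NEW.phone);
            END IF;
        END

    Because the trigger fires FOR EACH ROW, a multi-row UPDATE produces one audit row per changed column per affected row.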

    Read the article

  • Castle Windsor Controller Factory and RenderAction

    - by Bradley Landis
    I am running into an issue when using my Castle Windsor controller factory with the new RenderAction method. I get the following error message:

        A single instance of controller 'MyController' cannot be used to handle multiple requests. If a custom controller factory is in use, make sure that it creates a new instance of the controller for each request.

    This is the code in my controller factory:

        public class CastleWindsorControllerFactory : DefaultControllerFactory
        {
            private IWindsorContainer container;

            public CastleWindsorControllerFactory(IWindsorContainer container)
            {
                this.container = container;
            }

            public override IController CreateController(RequestContext requestContext, string controllerName)
            {
                return container.Resolve(controllerName) as IController;
            }

            public override void ReleaseController(IController controller)
            {
                this.container.Release(controller);
            }
        }

    Does anyone know what changes I need to make to make it work with RenderAction? I also find the error message slightly strange because it talks about multiple requests, but from what I can tell RenderAction doesn't actually create another request (BeginRequest isn't fired again).
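
    Windsor's default lifestyle is singleton, so the same controller instance gets handed out again when RenderAction resolves it, which is exactly what the MVC error message complains about. A sketch of registering controllers as transient (assuming Windsor 3.x-style registration; adjust the assembly selection to your project):

        using System.Reflection;
        using System.Web.Mvc;
        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        public static class ControllerInstaller
        {
            public static IWindsorContainer RegisterControllers()
            {
                var container = new WindsorContainer();
                // Transient lifestyle => a fresh controller instance per Resolve call.
                container.Register(
                    Classes.FromAssembly(Assembly.GetExecutingAssembly())
                           .BasedOn<IController>()
                           .LifestyleTransient());
                return container;
            }
        }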

    Read the article

  • Subversion (SVN) equivalent to Visual Source Safe (VSS) "Share"

    - by CraftyFella
    Hi, I have a scenario in my project where I need to share a single file between multiple projects in the same solution. Back in my Visual Source Safe days (shudder), I'd use the "Share" option to allow me to make changes to this file in any of the locations; then, once it was checked in, I could guarantee that the other locations would get the update. I'm trying to do this in Subversion but I can't seem to find the option anywhere. I do know about svn:externals, however I'm only interested in sharing a single file between multiple locations. Does anyone know how to do this in Subversion? Thanks
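
    A sketch using file externals, which Subversion supports from version 1.6 onward; the repository path and file name here are made up, and the target must live in the same repository:

        # In the working copy of each project that needs the shared file:
        svn propset svn:externals "^/trunk/Shared/Common.cs Common.cs" .
        svn commit -m "Pull Common.cs in as a file external"
        svn update    # brings the external file into the working copy

    Edits made through any of the working copies land on the single master file, which is close to what the VSS "Share" option did.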

    Read the article

  • NSArray vs. SQLite for Complex Queries on iPhone

    - by GingerBreadMane
    Developing for iPhone, I have a collection of points that I need to make complex queries on. For example: "How many points have a y-coordinate of 10" and "Return all points with an X-coordinate between 3 and 5 and a y-coordinate of 7". Currently, I am just cycling through each element of an NSArray and checking to see if each element matches my query. It's a pain to write the queries though. SQLite would be much nicer. I'm not sure which would be more efficient though since a SQLite database resides on disk and not in memory (to my understanding). Would SQLite be as efficient or more efficient here? Or is there a better way to do it other than these methods that I haven't thought of? I would need to perform the multiple queries with multiple sets of points thousands of times, so the best performance is important.
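
    For a sense of what the queries would look like, a sketch of the SQLite side (table and index names are made up); note that SQLite can also keep the whole database in memory by opening the special path ":memory:", which removes the disk-residency concern:

        CREATE TABLE points (x REAL, y REAL);
        CREATE INDEX idx_points_y   ON points (y);
        CREATE INDEX idx_points_x_y ON points (x, y);

        -- "How many points have a y-coordinate of 10?"
        SELECT COUNT(*) FROM points WHERE y = 10;

        -- "All points with an x-coordinate between 3 and 5 and a y-coordinate of 7"
        SELECT x, y FROM points WHERE x BETWEEN 3 AND 5 AND y = 7;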

    Read the article

  • Literate Haskell (.lhs) and Haddock

    - by finnsson
    At the moment I'm only using Haddock, but after seeing some really interesting examples of literate Haskell (e.g. this gist) I'm interested in trying it out in a project. My questions are: What do you write as Haddock comments and what do you write in the literate part? How do you scale literate programming to multiple files? Can anyone point me to an example where literate programming is used in a package with multiple modules? What is your experience of using literate programming in larger packages? Which flavour (Markdown, LaTeX, ...) of literate Haskell is preferred? Why are you programming in literate Haskell or plain vanilla Haskell? Are you programming in both styles, and if so, why? Do you prefer block-style (\begin{code}) or Bird-style (>)? Why?
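
    For reference, a tiny Bird-style .lhs sketch (module name made up) showing how the literate narrative and Haddock comments coexist: the prose lines are ignored by GHC, only lines starting with ">" are compiled, and Haddock comments still sit on the exported names.

        The surrounding prose is the literate narrative: it explains why the
        module exists and never reaches the compiler.

        > module Doubling (double) where
        >
        > -- | Double a number (this Haddock comment ends up in the API docs).
        > double :: Int -> Int
        > double x = 2 * x

    The block-style equivalent wraps the code in \begin{code} ... \end{code} instead, which tends to play better with LaTeX tooling.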

    Read the article

  • nodejs async.waterfall method

    - by user1513388
    Update 2: complete code listing.

        var request = require('request');
        var cache = require('memory-cache');
        var async = require('async');
        var server = '172.16.221.190'
        var user = 'admin'
        var password ='Passw0rd'
        var dn ='\\VE\\Policy\\Objects'
        var jsonpayload = {"Username": user, "Password": password}

        async.waterfall([
            //Get the API Key
            function(callback){
                request.post({uri: 'http://' + server +'/sdk/authorize/', json: jsonpayload, headers: {'content_type': 'application/json'} }, function (e, r, body) {
                    callback(null, body.APIKey);
                })
            },
            //List the credential objects
            function(apikey, callback){
                var jsonpayload2 = {"ObjectDN": dn, "Recursive": true}
                request.post({uri: 'http://' + server +'/sdk/Config/enumerate?apikey=' + apikey, json: jsonpayload2, headers: {'content_type': 'application/json'} }, function (e, r, body) {
                    var dns = [];
                    for (var i = 0; i < body.Objects.length; i++) {
                        dns.push({'name': body.Objects[i].Name, 'dn': body.Objects[i].DN})
                    }
                    callback(null, dns, apikey);
                })
            },
            function(dns, apikey, callback){
                // console.log(dns)
                var cb = [];
                for (var i = 0; i < dns.length; i++) {
                    //Retrieve the credential
                    var jsonpayload3 = {"CredentialPath": dns[i].dn, "Pattern": null, "Recursive": false}
                    console.log(dns[i].dn)
                    request.post({uri: 'http://' + server +'/sdk/credentials/retrieve?apikey=' + apikey, json: jsonpayload3, headers: {'content_type': 'application/json'} }, function (e, r, body) {
                        // console.log(body)
                        cb.push({'cl': body.Classname})
                        callback(null, cb, apikey);
                        console.log(cb)
                    });
                }
            }
        ], function (err, result) {
            // console.log(result)
            // result now equals 'done'
        });

    Update: I'm building a small application that needs to make multiple HTTP calls to an external API and amalgamate the results into a single object or array. E.g.:

        1. Connect to endpoint and get auth key - pass auth key to step 2.
        2. Connect to endpoint using auth key and get JSON results - create an object containing summary results and pass to step 3.
        3. Iterate over the passed summary results and call the API for each item in the object to get detailed information for each summary line.
        4. Create a single JSON data structure that contains the summary and detail information.

    The original question below outlines what I've tried so far!

    Original question: Will the async.waterfall method support multiple callbacks? I.e. iterate over an array that's passed from a previous item in the chain, then invoke multiple HTTP requests, each of which would have its own callback. E.g.:

        sync.waterfall([
            function(dns, key, callback){
                var cb = [];
                for (var i = 0; i < dns.length; i++) {
                    //Retrieve the credential
                    var jsonpayload3 = {"Cred": dns[i].DN, "Pattern": null, "Recursive": false}
                    console.log(dns[i].DN)
                    request.post({uri: 'http://' + vedserver +'/api/cred/retrieve?apikey=' + key, json: jsonpayload3, headers: {'content_type': 'application/json'} }, function (e, r, body) {
                        console.log(body)
                        cb.push({'cl': body.Classname})
                        callback(null, cb, key);
                    });
                }
            }
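
    The reason only one result comes back is that the waterfall step's callback is invoked once per iteration of the for loop, and async.waterfall ignores every call after the first. A sketch of the usual fix, replacing that step with async.map so the step's callback fires exactly once with all of the collected results (field names follow the question's code):

        function (dns, apikey, callback) {
            // Fire one request per credential; async.map gathers the results in order
            // and invokes the final callback a single time when they have all finished.
            async.map(dns, function (item, done) {
                var payload = { "CredentialPath": item.dn, "Pattern": null, "Recursive": false };
                request.post({
                    uri: 'http://' + server + '/sdk/credentials/retrieve?apikey=' + apikey,
                    json: payload,
                    headers: { 'content_type': 'application/json' }
                }, function (e, r, body) {
                    done(e, { cl: body && body.Classname });
                });
            }, function (err, creds) {
                callback(err, creds, apikey);   // hand the whole array to the next waterfall step
            });
        }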

    Read the article

  • How can I generate a unique ID using a hash in Perl?

    - by sganesh
    I am writing a message transfer program between multiple clients and a server, and I want to generate a unique message ID for every message. It should be generated by the server and returned to the client. For the message transfer I am using a hash data structure, e.g.: { api => POST, username => sganesh, pass => "pass", message => "hai", time => "current_time", } I want to generate a unique ID using this hash. I have tried some approaches, MD5 and freeze, but these give an unreadable ID. I want a meaningful or readable unique ID. I thought we could use microseconds to differentiate the IDs, but the problem there is multiple clients. In any situation my ID should be unique. Can anyone help me out of this problem? Thanks in advance.
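
    A minimal sketch of a readable ID built from pieces the server alone controls: the sender, a high-resolution timestamp, the server's process ID, and a per-process counter, so two clients hitting the server in the same microsecond still get different IDs (the exact format is just an illustration):

        use strict;
        use warnings;
        use Time::HiRes qw(gettimeofday);

        my $counter = 0;

        sub next_message_id {
            my ($msg) = @_;                      # $msg is the message hash from the question
            my ($sec, $usec) = gettimeofday();
            # e.g. "sganesh-1273498123-004521-8842-0001": readable, sortable by time,
            # and unique because only the server increments the counter.
            return sprintf('%s-%d-%06d-%d-%04d',
                           $msg->{username}, $sec, $usec, $$, ++$counter);
        }

        my $id = next_message_id({ username => 'sganesh', message => 'hai' });
        print "$id\n";

    If several server processes run in parallel, adding the hostname to the format keeps the IDs unique across them as well.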

    Read the article

  • SVN Feature Branch Method

    - by Seth
    I am getting an SVN server set up and will be using the feature branch method. I plan on having one or more branches making up a release tag. How do I merge (?) multiple branches into the release tag, while still maintaining diffs and such? I've given an example of our workflow below:

        1. Multiple devs pull to local
        2. Create feature branch
        3. Commit to branch
        4. Use branch to build QA

    (Here is where my question starts.) I need to have all the branches for the next build put into a build tag to be used to build Production.
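
    A sketch of the usual flow under these assumptions: finished feature branches are reintegrated into trunk, and the build/release tag is then cut from trunk (branch and tag names are made up). History, and therefore the diffs, is preserved because an SVN tag is just a cheap copy.

        # Merge each finished feature branch back into a trunk working copy:
        cd trunk-wc
        svn merge --reintegrate ^/branches/feature-login .
        svn commit -m "Reintegrate feature-login into trunk"

        svn merge --reintegrate ^/branches/feature-reports .
        svn commit -m "Reintegrate feature-reports into trunk"

        # Cut the build tag from trunk:
        svn copy ^/trunk ^/tags/build-42 -m "Tag build 42 for QA/Production"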

    Read the article

  • OpenMP vs OpenCL for computer vision

    - by user1235711
    I am creating a computer vision application that detects objects via a web camera, and I am currently focusing on the performance of the application. My problem is in the part of the application that generates the XML cascade file using Haar training; this is very slow and takes about 6 days. To get around this problem I decided to use multiprocessing, to minimize the total time to generate the Haar training XML file. I found two candidate solutions: OpenCL, and OpenMP together with OpenMPI. Now I'm confused about which one to use. I read that OpenCL lets you use multiple CPUs and the GPU, but only on the same machine. Is that so? On the other hand, OpenMP is for multiprocessing on one machine, and using OpenMPI we can use multiple CPUs over the network, but OpenMP has no GPU support. Can you please suggest the pros and cons of using either of the libraries?
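
    For a sense of the difference in programming models: OpenMP only needs a pragma to spread a loop over the cores of one machine, whereas OpenCL requires writing kernels and managing device buffers to reach the GPU. A minimal OpenMP sketch (the per-sample work function is hypothetical):

        #include <omp.h>

        void process_sample(int i);   // hypothetical per-sample training work

        void train(int n_samples) {
            // Each iteration runs on whichever CPU core is free; no GPU involved.
            #pragma omp parallel for
            for (int i = 0; i < n_samples; i++) {
                process_sample(i);
            }
        }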

    Read the article

  • While in CMD shell, copying files from host OS to guest VM locks files (VMware Player/Workstation)

    - by Malcolm
    We're running the latest versions of VMWare Player and Workstation for Windows. The following behavior is identical across both products. Problem: We open a CMD prompt in our guest OS (XP, Vista, Windows 7) and copy files from our host OS using the standard CMD shell copy command: copy z:\C$\testfiles The copy completes successfully, but from that point forward, all the files that were copied to our guest OS are now LOCKED on our host OS. This does not happen if we use Windows Explorer to copy files - it only happens when files are copied via the CMD shell. As mentioned at the start of this question, this behavior is reproducible in both VMWare Player and VMWare Workstation across multiple machines and multiple guest OS's. I've googled for a workaround, but without success. Any ideas appreciated. Malcolm

    Read the article

  • A 'do' statement at the end of my perl script never runs

    - by Jeremy Petzold
    In my main script, I am doing some archive manipulation. Once I have completed that, I want to run a separate script to upload my archives to an FTP server. Separately, these scripts work well. I want to add the FTP script to the end of my archive script so I only need to worry about scheduling one script to run, and I want to guarantee that the first script completes its work before the FTP script is called. After looking at all the different methods to call my FTP script, I settled on 'do'; however, when my do statement is at the end of the script, it never runs. When I place it in my main foreach loop, it runs fine, but it runs multiple times, which I want to avoid since the FTP script can handle having multiple archives to upload. Is there something I am missing? Why does it not run? Thanks

    Read the article

  • Could someone give me their two cents on this optimization strategy

    - by jimstandard
    Background: I am writing a matching script in Python that will match records of a transaction in one database to names of customers in another database. The complexity is that names are not unique and can be represented in multiple different ways from transaction to transaction. Rather than doing multiple queries on the database (which is pretty slow), would it be faster to get all of the records where the last name (which in this case we will say never changes) is "Smith", load all of those records into memory, and then go through each one looking for matches for a specific "John Smith" using various data points? Would this be faster, is it feasible in Python, and if so, does anyone have any recommendations for how to do it?
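
    A minimal sketch of the proposed strategy, assuming the customer rows can be fetched once (the row layout and the scoring function are placeholders): group customers by last name in a dict, so each transaction only has to scan its own small bucket in memory.

        from collections import defaultdict

        def build_last_name_index(customer_rows):
            """Group customer records by lowercased last name for fast in-memory lookup."""
            index = defaultdict(list)
            for row in customer_rows:            # e.g. (customer_id, first_name, last_name, ...)
                index[row[2].strip().lower()].append(row)
            return index

        def match_transaction(index, last_name, transaction):
            """Return candidate customers for one transaction, best match first."""
            bucket = index.get(last_name.strip().lower(), [])
            return sorted(bucket, key=lambda row: score(row, transaction), reverse=True)

        def score(row, transaction):
            # Placeholder: compare first-name variants, address, dates, and other data points.
            return 0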

    Read the article

  • Design Stock Server Application in C++

    - by Avinash
    [Question asked in an ML interview] Design a server/application which handles multiple incoming stock updates (stock symbol and value). The server needs to store the information in some cache (need to design the data structure). There are multiple clients connected to the server using TCP/IP sockets. They will subscribe to a particular request, say stock symbol XYZ, and maybe to more than one. As and when a stock symbol changes, the server should broadcast the information to the subscribed clients. If the TCP/IP write fails, the server should handle unregistration of the client. What data structures would be used, and what would the threading model be?
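
    One possible shape for the cache, sketched under these assumptions: the latest quote per symbol plus the set of subscribed client sockets, guarded by a single mutex (all names are illustrative). A typical threading model pairs this with one accept/read thread and a pool of broadcast workers fed from a queue.

        #include <map>
        #include <set>
        #include <string>
        #include <vector>
        #include <pthread.h>

        struct Quote { double price; long timestamp; };

        static std::map<std::string, Quote>          latest;        // symbol -> last known value
        static std::map<std::string, std::set<int> > subscribers;   // symbol -> client socket fds
        static pthread_mutex_t cache_mutex = PTHREAD_MUTEX_INITIALIZER;

        void subscribe(const std::string& symbol, int client_fd) {
            pthread_mutex_lock(&cache_mutex);
            subscribers[symbol].insert(client_fd);
            pthread_mutex_unlock(&cache_mutex);
        }

        void on_update(const std::string& symbol, const Quote& q) {
            pthread_mutex_lock(&cache_mutex);
            latest[symbol] = q;
            std::vector<int> targets(subscribers[symbol].begin(), subscribers[symbol].end());
            pthread_mutex_unlock(&cache_mutex);

            for (size_t i = 0; i < targets.size(); ++i) {
                // send() the quote to targets[i]; if the write fails, re-acquire the
                // lock and erase the fd from subscribers (the required unregistration).
            }
        }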

    Read the article

  • <thead> and <tfoot> in Safari

    - by AJ
    I am trying to print a page with multiple tables. The problem is that any of these tables may break and carry over to the next page. I have been trying to get the table header to repeat on the second page. I am currently using thead and setting the display property to table-header-group. This works just as expected in IE and Firefox, but the header will not repeat in Safari. Since we are using software that converts our page to a PDF document for printing, and the third-party software uses a Safari engine, we are stuck with this problem. Does anyone know a way / workaround to make headers repeat if the table spans multiple pages in Safari?

    Read the article

  • Performing time consuming operation on STL container within a lock

    - by Ashley
    I have an unordered_map of unordered_maps which stores pointers to objects. The maps are shared by multiple threads. I need to iterate through each object and perform some time-consuming operation on it (like sending it over the network). How could I lock the maps so that they aren't blocked for too long?

        typedef std::unordered_map<string, classA*> MAP1;
        typedef std::unordered_map<int, MAP1*> MAP2;
        MAP2 map2;

        pthread_mutex_lock(&mutexA);  // how could I lock the maps? Could I reduce the lock granularity?
        for (MAP2::iterator it2 = map2.begin(); it2 != map2.end(); it2++) {
            for (MAP1::iterator it1 = it2->second->begin(); it1 != it2->second->end(); it1++) {
                // perform some time consuming operation on it1->second, e.g.
                sendToNetwork(*(it1->second));
            }
        }
        pthread_mutex_unlock(&mutexA);
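
    A common way to shrink the critical section, sketched against the question's typedefs: hold the lock only long enough to snapshot the pointers, then do the slow network sends with the mutex released. This assumes other threads do not delete the pointed-to objects while the snapshot is in use (for example because deletion is deferred or the objects are reference-counted).

        #include <vector>

        std::vector<classA*> snapshot;

        pthread_mutex_lock(&mutexA);
        for (MAP2::iterator it2 = map2.begin(); it2 != map2.end(); ++it2) {
            MAP1* inner = it2->second;
            for (MAP1::iterator it1 = inner->begin(); it1 != inner->end(); ++it1) {
                snapshot.push_back(it1->second);    // cheap: only the pointers are copied
            }
        }
        pthread_mutex_unlock(&mutexA);

        // The expensive part runs without holding the mutex, so writers are not blocked.
        for (size_t i = 0; i < snapshot.size(); ++i) {
            sendToNetwork(*snapshot[i]);
        }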

    Read the article

  • MySQL Locks: order of unblocked threads

    - by teehoo
    I have a MySQL ISAM table being accessed by multiple PHP instances. Right now I'm using a WRITE lock to serialize access to this table. My question is: how do I ensure that the PHP instances get served on a first-come-first-served basis? Or is this the default behaviour? The official MySQL documentation doesn't mention anything about the blocked thread order for threads of the same lock type (i.e. multiple threads attempting a WRITE LOCK). It only mentions that a WRITER will jump to the front of the waiting queue if READERS are waiting.

    Read the article

  • Can anyone explain why the 1st example gets different results than the following 2?

    - by klumsy
    $b = (2,3) $myarray1 = @(,$b,$b) $myarray1[0].length #this will be 1 $myarray1[1].length $myarray2 = @( ,$b ,$b ) $myarray2[0].length #this will be 2 $myarray[1].length $myarray3 = @(,$b ,$b ) $myarray3[0].length #this will be 2 $myarray3[1].length UPDATE I think on #powershell IRC we have worked it out, Here is another example that demonstrates the danger of breaking with the comma on the following line rather than the top line when listing multiple items in an array over multiple lines. $b = (1..20) $a = @( $b, $b ,$b, $b, $b ,$b) for($i=0;$i -lt $a.length;$i++) { $a[$i].length } "--------" $a = @( $b, $b ,$b ,$b, $b ,$b) for($i=0;$i -lt $a.length;$i++) { $a[$i].length } produces 20 20 20 20 20 20 -------- 20 20 20 1 20 20 I'm curious how people will explain this. I think i understand it now, but would have trouble explaining it in a concise understandable fashion, though the above example goes somewhat towards that goal.

    Read the article
