Search Results

Search found 18677 results on 748 pages for 'current'.


  • change image upon selection, searching list for the src value jQuery

    - by Charles Marsh
    Hello all, can anyone see anything wrong with this code? It just isn't working, and it should be clear what I am trying to do: jQuery(document).ready(function($) { $('#product-variants-option-0').change(function() { // What is the sku of the current variant selection. var select_value = $(this).find(':selected').val(); if (select_value == "Kelly Green") { var keyword = "kly"; }; var new_src = $('#preload img[src*="kly"]'); $('div.image').attr('src', new_src); }); }); The selection: <select class="single-option-selector-0" id="product-variants-option-0"> <option value="Kelly Green">Kelly Green</option> <option value="Navy">Navy</option> <option value="Olive">Olive</option> <option value="Cocoa">Cocoa</option> </select> I'm trying to search an unordered list: <ul id="preload" style="display:none;"> <li>0z-kelly-green-medium.jpg</li> <li>0z-olive-medium.jpg</li> </ul>
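
    A minimal corrected sketch, assuming the visible picture is an <img> inside div.image and that each option value maps to a keyword appearing in the preloaded file names (both are assumptions; the lookup table below is hypothetical):

        jQuery(document).ready(function ($) {
          // hypothetical map from option value to the keyword used in the file names
          var keywords = { "Kelly Green": "kelly-green", "Olive": "olive" };

          $('#product-variants-option-0').change(function () {
            var keyword = keywords[$(this).val()];
            if (!keyword) { return; }
            // find the first preloaded file name that contains the keyword
            var src = $('#preload li').filter(function () {
              return $(this).text().indexOf(keyword) !== -1;
            }).first().text();
            // set the src on the img element, not on the div, and pass a string, not a jQuery object
            if (src) { $('div.image img').attr('src', src); }
          });
        });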

    Read the article

  • ruby gem not found although it is installed

    - by Eimantas
    I found some similar problems here on SO, but none seem to match my case (sorry if I overlooked any). Here's my problem: I installed the oauth-plugin gem into the Ruby gems dir, but trying to use it in a Rails app tells me that it's not being found. Here's the output of the relevant commands: Installation % s gem install oauth-plugin Successfully installed oauth-plugin-0.3.14 1 gem installed Installing ri documentation for oauth-plugin-0.3.14... Installing RDoc documentation for oauth-plugin-0.3.14... gem which oauth-plugin output: % gem which oauth-plugin /usr/lib/ruby/gems/1.8/gems/oauth-plugin-0.3.14/lib/oauth-plugin.rb gem env output: % gem env RubyGems Environment: - RUBYGEMS VERSION: 1.3.6 - RUBY VERSION: 1.8.7 (2009-12-24 patchlevel 248) [i686-darwin10.2.0] - INSTALLATION DIRECTORY: /usr/lib/ruby/gems/1.8 - RUBY EXECUTABLE: /usr/bin/ruby - EXECUTABLE DIRECTORY: /usr/bin - RUBYGEMS PLATFORMS: - ruby - x86-darwin-10 - GEM PATHS: - /usr/lib/ruby/gems/1.8 - /Users/eimantas/.gem/ruby/1.8 - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => true - :bulk_threshold => 1000 - :gem => ["--no-ri", "--no-rdoc"] - :sources => ["http://gems.ruby.lt/", "http://rubygems.org/"] - REMOTE SOURCES: - http://gems.ruby.lt/ - http://rubygems.org/ Doing ls -l /usr/lib/ruby shows this: % ls -l /usr/lib/ruby lrwxr-xr-x 1 root wheel 76 Aug 14 2009 /usr/lib/ruby -> ../../System/Library/Frameworks/Ruby.framework/Versions/Current/usr/lib/ruby And the gem in question is in the intended location. This is not the only gem that rubygems fails to find, even though it is located where it should be. Any guidance towards the solution is much appreciated.
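
    A short diagnostic sketch of what I would check (these are only suggestions; the config.gem line is the usual Rails 2.x way of declaring a gem dependency, and the gem name is taken from the post):

        # make sure the ruby and gem commands resolve to the same installation
        which ruby && ruby -v
        gem env home
        # in the Rails app, declare the dependency in config/environment.rb:
        #   config.gem "oauth-plugin"
        # then let Rails verify/install it:
        rake gems:install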

    Read the article

  • How to use Common Table Expression and check no duplication in SQL Server

    - by vodkhang
    I have a table that references itself. User table: id, username, managerid, where managerid links back to id. Now I want to get all the managers: the direct manager, the manager of the direct manager, and so on... The problem is that I do not want a recursive SQL query that never terminates, so I want to check whether an id is already in a list and, if it is, not include it again. Here is my SQL for that: with all_managers (id, username, managerid, idlist) as ( select u1.id, u1.username, u1.managerid, ' ' from users u1, users u2 where u1.id = u2.managerid and u2.id = 6 UNION ALL select u.id, u.username, u.managerid, idlist + ' ' + u.id from all_managers a, users u where a.managerid = u.id and charindex(cast(u.id as nvarchar(5)), idlist) != 0 ) select id, username from all_managers; The problem is with this line: select u1.id, u1.username, u1.managerid, ' ' SQL Server complains that I cannot put ' ' as the initializer for idlist, and nvarchar(40) does not work either. I do not know how to declare it inside a common table expression like this one; in DB2 I can usually just put varchar(40). My sample data (ID, UserName, ManagerID): (1, admin, 1), (2, a, 1), (3, b, 1), (4, c, 2). I want to find all managers of user c; the result should be: admin, a, b. A user can be his own manager (like admin), because ManagerID does not allow NULL and some users have no direct manager. With a common table expression this can lead to infinite recursion, so I am trying to avoid that situation by never including the same id twice: for example, in the first iteration we already have id 1, so from the second iteration on, 1 should never be allowed again. I also want to ask whether my current approach is good, and about any other solutions, because with a big database and a deep hierarchy I would have to initialize a big varchar to keep the list, and that consumes memory, right?
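
    A minimal sketch of how the CTE could be corrected (my own reconstruction, assuming SQL Server 2005 or later): give idlist an explicit type in the anchor member so the recursive member's concatenation matches it, delimit the ids with spaces so "1" cannot match inside "11", and only recurse into ids that are not yet in the list:

        WITH all_managers (id, username, managerid, idlist) AS
        (
            SELECT u1.id, u1.username, u1.managerid,
                   CAST(' ' + CAST(u1.id AS nvarchar(10)) + ' ' AS nvarchar(4000))
            FROM users u1
            JOIN users u2 ON u1.id = u2.managerid
            WHERE u2.id = 6
            UNION ALL
            SELECT u.id, u.username, u.managerid,
                   CAST(a.idlist + CAST(u.id AS nvarchar(10)) + ' ' AS nvarchar(4000))
            FROM all_managers a
            JOIN users u ON a.managerid = u.id
            -- skip ids we have already visited, which breaks cycles
            WHERE CHARINDEX(' ' + CAST(u.id AS nvarchar(10)) + ' ', a.idlist) = 0
        )
        SELECT DISTINCT id, username
        FROM all_managers;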

    Read the article

  • condition in recursion - best practise

    - by mo
    Hi there! What's the best practice for breaking out of the loop in code like this? My ideas were: Child Find(Parent parent, object criteria) { Child child = null; foreach(Child wannabe in parent.Childs) { if (wannabe.Match(criteria)) { child = wannabe; break; } else { child = Find(wannabe, criteria); } } return child; } or Child Find(Parent parent, object criteria) { Child child = null; var conditionator = from c parent.Childs where child != null select c; foreach(Child wannabe in conditionator) { if (wannabe.Match(criteria)) { child = wannabe; } else { child = Find(wannabe, criteria); } } return child; } or Child Find(Parent parent, object criteria) { Child child = null; var enumerator = parent.Childs.GetEnumerator(); while(child != null && enumerator.MoveNext()) { if (enumerator.Current.Match(criteria)) { child = wannabe; } else { child = Find(wannabe, criteria); } } return child; } What do you think, any better ideas? I'm looking for the nicest solution :D mo
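
    One more alternative, sketched here as a suggestion rather than taken from the post: returning as soon as a match is found removes both the flag variable and the explicit break, and also prevents a found child from being overwritten by a later null result of the recursion (Parent, Child, Childs and Match are the types and members from the snippets above):

        Child Find(Parent parent, object criteria)
        {
            foreach (Child candidate in parent.Childs)
            {
                if (candidate.Match(criteria))
                    return candidate;

                // search the subtree; stop as soon as something is found
                Child found = Find(candidate, criteria);
                if (found != null)
                    return found;
            }
            return null;
        }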

    Read the article

  • Graph navigation problem

    - by affan
    I have a graph of components and the relations between them. The user opens the graph and navigates through it based on his choices: he starts with the root node and clicks an expand button, which reveals new components related to the current component. The problem is when the user decides to collapse a node: I have to choose a sub-tree to hide and at the same time leave the graph in a consistent state, so that no expanded node is left with a missing relation to another node in the graph. In the case of cycles/loops between components I have difficulty choosing the sub-tree. For simplicity I go by the order in which the nodes were expanded, so if A was expanded into B and C, collapsing A will hide the nodes and edges it created. Now consider the following scenario ([-] means expanded, [+] means not yet expanded). A is expanded to reveal B and C. Then B is expanded to reveal D. C is expanded, which creates a link between C and the existing node D and also creates node E. Now the user decides to collapse B. Since, by order of expansion, D is a child of B, it will collapse and hide D. This leaves the graph in an inconsistent state, because C is expanded with an edge to D but D is no longer there; removing the CD edge would still be inconsistent. Collapsing C instead, with E having a cyclic link (e.g. back to B), produces the same problem.

        /-----B[-]-----\
    A[-]                D[+]
        \-----C[-]-----/
              \
               E[+]

    So, any idea how I can solve this problem? The user needs to navigate through the graph and should be able to collapse nodes, but I am stuck on the problem of cyclic nodes, where collapsing any node in the loop will leave the graph in an inconsistent state.

    Read the article

  • How do I read the manifest file for a webapp running in apache tomcat?

    - by Nik Reiman
    I have a webapp which contains a manifest file, in which I write the current version of my application during an ant build task. The manifest file is created correctly, but when I try to read it in during runtime, I get some strange side-effects. My code for reading in the manifest is something like this: InputStream manifestStream = Thread.currentThread() .getContextClassLoader() .getResourceAsStream("META-INFFFF/MANIFEST.MF"); try { Manifest manifest = new Manifest(manifestStream); Attributes attributes = manifest.getMainAttributes(); String impVersion = attributes.getValue("Implementation-Version"); mVersionString = impVersion; } catch(IOException ex) { logger.warn("Error while reading version: " + ex.getMessage()); } When I attach eclipse to tomcat, I see that the above code works, but it seems to get a different manifest file than the one I expected, which I can tell because the ant version and build timestamp are both different. Then, I put "META-INFFFF" in there, and the above code still works! This means that I'm reading some other manifest, not mine. I also tried this.getClass().getClassLoader().getResourceAsStream(...) But the result was the same. What's the proper way to read the manifest file from inside of a webapp running in tomcat? Edit: Thanks for the suggestions so far. Also, I should note that I am running tomcat standalone; I launch it from the command line, and then attach to the running instance in Eclipse's debugger. That shouldn't make a difference, should it?
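
    A minimal sketch of the approach I would try (assuming the code can get hold of the ServletContext, e.g. via getServletContext() in a servlet or a ServletContextListener): the context resolves paths against the webapp itself, whereas the context classloader searches every JAR on the classpath and may hand back the first MANIFEST.MF it finds there, which would explain why a different manifest shows up:

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.jar.Manifest;
        import javax.servlet.ServletContext;

        public final class ManifestReader {
            /** Reads Implementation-Version from this webapp's own MANIFEST.MF. */
            public static String readVersion(ServletContext ctx) throws IOException {
                InputStream in = ctx.getResourceAsStream("/META-INF/MANIFEST.MF");
                if (in == null) {
                    return null;
                }
                try {
                    return new Manifest(in).getMainAttributes()
                                           .getValue("Implementation-Version");
                } finally {
                    in.close();
                }
            }
        }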

    Read the article

  • Async file uploads in Firefox reset on any DOM change

    - by Vibhu
    I'm pretty sure this is a Firefox or Flash-related bug, but I just want to check if anyone has run into this problem or knows how to fix it. Basically, we have a multi-file upload widget for our highly dynamic web app (think Gmail). We've tried both uploadify for jQuery and the YUI uploader. We've also tried taking those out of our app interface and putting them in an iFrame. What happens is that on any DOM manipulation, even if the uploader is in an iFrame, be it a tab change (in our web app) that covers the iframe temporarily, or a block, etc., the uploader will stop its current upload. In the case of the YUI uploader, it fires the "contentReady" event again. This ONLY happens in Firefox; IE and Chrome are fine. In case you are wondering, we really don't have any custom needs here. We just need multi-file upload support, and we need to give people free rein to tab around in our interface while an upload is in progress. It seems like Yahoo! and Gmail have both solved this problem. How? What are we doing wrong?

    Read the article

  • WCF: How can I send data while gracefully closing a connection?

    - by mafutrct
    I've got a WCF service that offers a Login method. A client is required to call this method (it is the only operation with IsInitiating=true). This method should return a string that describes the success of the call in any case; if the login failed, the connection should be closed. The issue is with the timing of the close: I'd like to send the return value, then immediately close the connection. string Login (string name, string pass) { if (name != pass) { OperationContext.Current.Channel.Close (); return "fail"; } else { return "yay"; } } MSDN states that calling Close on the channel causes an ICommunicationObject to gracefully transition from the Opened state to the Closed state, and that the Close method allows any unfinished work to be completed before returning (for example, finishing sending any buffered messages). This did not work for me (or my understanding is wrong): the close is executed immediately, and WCF does not wait for the Login method to finish executing and return a string but closes the connection earlier. Therefore I assume that calling Close does not wait for the running method to finish. Now, how can I still return a value, then close?
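
    A sketch of one workaround (my own suggestion, not from the post): subscribe to OperationContext.OperationCompleted, which is raised once WCF has finished processing the operation and sent the reply, and close the channel there instead of inside the method body:

        string Login(string name, string pass)
        {
            if (name != pass)
            {
                OperationContext context = OperationContext.Current;
                // runs only after the reply message has gone out to the client
                context.OperationCompleted += (sender, args) => context.Channel.Close();
                return "fail";
            }
            return "yay";
        }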

    Read the article

  • Actionscript 3.0 - drag and throw with easing

    - by Joe Hamilton
    I'm creating a map in Flash and I would like to have smooth movement similar to this: http://www.conceptm.nl/ I have made a start but I'm having trouble taking it to the next stage. My code currently throws the movie clip after the mouse is released, but there is no easing while the mouse button is down. Any tips on how I would achieve this? Here is my current code: // Vars var previousPostionX:Number; var previousPostionY:Number; var throwSpeedX:Number; var throwSpeedY:Number; var isItDown:Boolean; // Event Listeners addEventListener(MouseEvent.MOUSE_DOWN, clicked); addEventListener(MouseEvent.MOUSE_UP, released); // Event Handlers function clicked(theEvent:Event) { isItDown =true; addEventListener(Event.ENTER_FRAME, updateView); } function released(theEvent:Event) { isItDown =false; } function updateView(theEvent:Event) { if (isItDown==true){ throwSpeedX = mouseX - previousPostionX; throwSpeedY = mouseY - previousPostionY; mcTestMovieClip.x = mouseX; mcTestMovieClip.y = mouseY; } else{ mcTestMovieClip.x += throwSpeedX; mcTestMovieClip.y += throwSpeedY; throwSpeedX *=0.9; throwSpeedY *=0.9; } previousPostionX= mcTestMovieClip.x; previousPostionY= mcTestMovieClip.y; }
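
    A minimal sketch of one way to get easing while dragging as well (my assumption about the effect wanted): instead of snapping the clip to the mouse each frame, move it a fraction of the remaining distance, and keep that per-frame delta as the throw velocity for when the button is released:

        function updateView(theEvent:Event):void {
            if (isItDown) {
                // ease toward the mouse: cover 20% of the remaining distance each frame
                throwSpeedX = (mouseX - mcTestMovieClip.x) * 0.2;
                throwSpeedY = (mouseY - mcTestMovieClip.y) * 0.2;
            } else {
                // after release, keep coasting and slow down gradually
                throwSpeedX *= 0.9;
                throwSpeedY *= 0.9;
            }
            mcTestMovieClip.x += throwSpeedX;
            mcTestMovieClip.y += throwSpeedY;
        }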

    Read the article

  • Product Catalog Schema design

    - by FlySwat
    I'm building a proof-of-concept schema for a product catalog to possibly replace a very aging and crufty one we use. In our business, we sell both physical materials and services (one-time and recurring charges). The current catalog schema has each distinct category broken out into individual tables; while this is nicely normalized and performs well, it is fairly difficult to extend. Adding a new attribute to a particular product involves changing the table schema and backpopulating old data. An idea I've been toying with is something along the lines of a base set of entity tables in 3rd normal form, which would contain the facts that are common among ALL products. Then I'd like to build an Attribute-Entity-Value schema that allows each entity type to be extended in a flexible way using just data and no schema changes. Finally, I'd like to denormalize this data model into materialized views for each individual entity type; these views are what the application would access. We also have many tables that contain business rules and compatibility rules, and these would join against the base entity tables instead of the views. My big concerns here are: Performance - Attribute-Entity-Value schemas are flexible, but typically perform poorly; should I be concerned? More performance - denormalizing using materialized views may have some risks; I'm not positive on this yet. Complexity - while this schema is flexible and maintainable using just data, I worry that the complexity of the design might make future schema changes difficult. For those who have designed product catalogs for large-scale enterprises, am I going down the totally wrong path? Is there any good best-practice schema design reading available for product catalogs?
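
    A minimal sketch of the attribute-entity-value layer being described (the table and column names are my own illustration, not the actual schema):

        -- facts common to ALL products live on the base entity table
        CREATE TABLE Product (
            ProductId   int           PRIMARY KEY,
            Sku         nvarchar(50)  NOT NULL,
            Name        nvarchar(200) NOT NULL
        );

        -- each extensible attribute is just a row of data, not a schema change
        CREATE TABLE Attribute (
            AttributeId int           PRIMARY KEY,
            Name        nvarchar(100) NOT NULL,
            DataType    nvarchar(20)  NOT NULL   -- e.g. 'string', 'int', 'date'
        );

        CREATE TABLE ProductAttributeValue (
            ProductId   int NOT NULL REFERENCES Product(ProductId),
            AttributeId int NOT NULL REFERENCES Attribute(AttributeId),
            Value       nvarchar(max),
            PRIMARY KEY (ProductId, AttributeId)
        );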

    Read the article

  • Download, pause and resume a download using Indy components

    - by Salvador
    I'm currently using the TIdHTTP component to download a file from the internet, and I'm wondering whether it is possible to pause and resume the download using this component, or maybe another Indy component. This is my current code; it works fine for downloading a file (without resume). Now I want to pause the download, close my app, and when my app restarts, resume the download from the last saved position. var Http: TIdHTTP; MS : TMemoryStream; begin Result:= True; Http := TIdHTTP.Create(nil); MS := TMemoryStream.Create; try try Http.OnWork:= HttpWork;//this event give me the actual progress of the download process Http.Head(Url); FSize := Http.Response.ContentLength; AddLog('Downloading File '+GetURLFilename(Url)+' - '+FormatFloat('#,',FSize)+' Bytes'); Http.Get(Url, MS); MS.SaveToFile(LocalFile); except on E : Exception do Begin Result:=False; AddLog(E.Message); end; end; finally Http.Free; MS.Free; end; end;
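
    A rough sketch of the resume side (an untested assumption on my part; it relies on the server honouring HTTP Range requests and replying 206 Partial Content, and it writes straight to a TFileStream instead of a TMemoryStream so the partial data survives an app restart):

        var
          Http: TIdHTTP;
          FS: TFileStream;
          StartPos: Int64;
        begin
          Http := TIdHTTP.Create(nil);
          FS := TFileStream.Create(LocalFile, fmOpenReadWrite or fmShareDenyWrite);
          try
            StartPos := FS.Size;                       // bytes already on disk
            FS.Seek(0, soEnd);                         // append the missing tail
            Http.Request.CustomHeaders.Values['Range'] :=
              Format('bytes=%d-', [StartPos]);
            Http.Get(Url, FS);
          finally
            FS.Free;
            Http.Free;
          end;
        end;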

    Read the article

  • Does my API design violate RESTful principles?

    - by peta
    Hello everybody, I'm currently trying to design a RESTful API for a social network, but I'm not sure whether my current approach still accords with RESTful principles. I'd be glad if some brighter heads could give me some tips. Suppose the following URI represents the name field of a user account: people/{UserID}/profile/fields/name But there are almost a hundred possible fields, so I want the client to create its own field views or use predefined ones. Let's suppose that the following URI represents a predefined field view that includes the fields "name", "age" and "gender": utils/views/field-views/myFieldView And because field views are a kind of higher-level logic, I don't want to mix support for field views into the "people/{UserID}/profile/fields" resource. Instead I want to do the following: utils/views/field-views/myFieldView/{UserID} Though Leonard Richardson & Sam Ruby state in their book "RESTful Web Services" that a RESTful design is somehow like an "extreme object oriented" approach, I think that my approach is object oriented and therefore accords with RESTful principles. Or am I wrong? If not: are such "object oriented" approaches generally encouraged when used with care and in order to avoid query-based REST-RPC hybrids? Thanks for your feedback in advance, peta

    Read the article

  • ADO.NET Batch Insert with over 2000 parameters

    - by Liming
    Hello all, I'm using Enterprise library, but the idea is the same. I have a SqlStringCommand and the sql is constructed using StringBuilder in the forms of "insert into table (column1, column2, column3) values (@param1-X, @param2-X, @parm3-X)"+" " where "X" represents a "for loop" about 700 rows StringBuilder sb = new StringBuilder(); for(int i=0; i<700; i++) { sb.Append("insert into table (column1, column2, column3) values (@param1-"+i+", @param2-"+i, +",@parm3-"+i+") " ); } followed by constructing a command object injecting all the parameters w/ values into it. Essentially, 700 rows with 3 parameters, I ended up with 2100 parameters for this "one sql" Statement. It ran fine for about a few days and suddenly I got this error =============================================================== A severe error occurred on the current command. The results, if any, should be discarded. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.InternalExecuteNon Any pointers are greatly appreciated.
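
    Not necessarily the cause of this particular error, but SQL Server caps a single request at 2100 parameters, so this pattern breaks down as soon as a few more rows are added. One alternative sketch (my suggestion, not from the post): load the rows into a DataTable and stream them with SqlBulkCopy instead of building one giant parameterised INSERT:

        // using System.Data; using System.Data.SqlClient;
        // 'rows' and 'connectionString' are placeholders for whatever the app already has
        var table = new DataTable();
        table.Columns.Add("column1", typeof(string));
        table.Columns.Add("column2", typeof(string));
        table.Columns.Add("column3", typeof(string));
        foreach (var row in rows)
            table.Rows.Add(row.Column1, row.Column2, row.Column3);

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var bulk = new SqlBulkCopy(connection))
            {
                bulk.DestinationTableName = "table";   // the target table from the post
                bulk.ColumnMappings.Add("column1", "column1");
                bulk.ColumnMappings.Add("column2", "column2");
                bulk.ColumnMappings.Add("column3", "column3");
                bulk.WriteToServer(table);
            }
        }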

    Read the article

  • Unable to connect to UNC share with WindowsIdentity.Impersonate, but works fine using LogonUser

    - by Rob
    Hopefully I'm not missing something obvious here, but I have a class that needs to create some directories on a UNC share and then move files to the new directory. When we connect using LogonUser things work fine with no errors, but when we try and use the user indicated by Integrated Windows authentication we run into problems. Here's some working and non-working code to give you an idea what is going on. The following works and logs the requested information: [DllImport("advapi32.dll", SetLastError = true)] private static extern bool LogonUser(string lpszUsername, string lpszDomain, string lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken); [DllImport("kernel32.dll", CharSet = CharSet.Auto)] private static extern bool CloseHandle(IntPtr handle); IntPtr token; WindowsIdentity wi; if (LogonUser("user", "network", "password", 8, // LOGON32_LOGON_NETWORK_CLEARTEXT 0, // LOGON32_PROVIDER_DEFAULT out token)) { wi = new WindowsIdentity(token); WindowsImpersonationContext wic = wi.Impersonate(); Logging.LogMessage(System.Security.Principal.WindowsIdentity.GetCurrent().Name); Logging.LogMessage(path); DirectoryInfo info = new DirectoryInfo(path); Logging.LogMessage(info.Exists.ToString()); Logging.LogMessage(info.Name); Logging.LogMessage("LastAccessTime:" + info.LastAccessTime.ToString()); Logging.LogMessage("LastWriteTime:" + info.LastWriteTime.ToString()); wic.Undo(); CloseHandle(token); } The following fails and gives an error message indicating the network name is not available, but the correct user name is indicated by GetCurrent().Name: WindowsIdentity identity = (WindowsIdentity)HttpContext.Current.User.Identity; using (identity.Impersonate()) { Logging.LogMessage(System.Security.Principal.WindowsIdentity.GetCurrent().Name); Logging.LogMessage(path); DirectoryInfo info = new DirectoryInfo(path); Logging.LogMessage(info.Exists.ToString()); Logging.LogMessage(info.Name); Logging.LogMessage("LastAccessTime:" + info.LastAccessTime.ToString()); Logging.LogMessage("LastWriteTime:" + info.LastWriteTime.ToString()); }

    Read the article

  • jquery ui autocomplete with database

    - by user301766
    I'm fairly new to jQuery and am perhaps trying to achieve something that might be a bit hard for a beginner. I am trying to create an autocomplete that sends the current value to a PHP script, which then returns the matching values. Here is my JavaScript code: $("#login_name").autocomplete({ source: function(request, response) { $.ajax({ url: "http://www.myhost.com/myscript.php", dataType: "jsonp", success: function(data) { alert(data); response($.map(data, function(item) { return { label: item.user_login_name, value: item.user_id } })) } }) }, minLength: 2 }); And here is the last half of "myscript.php": while($row = $Database->fetch(MYSQLI_ASSOC)) { foreach($row as $column=>$val) { $results[$i][$column] = $val; } $i++; } print json_encode($results); Which produces the following output: [{"user_id":"2","user_login_name":"Name1"},{"user_id":"3","user_login_name":"Name2"},{"user_id":"4","user_login_name":"Name3"},{"user_id":"5","user_login_name":"Name4"},{"user_id":"6","user_login_name":"Name5"},{"user_id":"7","user_login_name":"Name6"}] Can anyone tell me where I am going wrong, please? I'm starting to get quite frustrated. The input box just turns "white" and no options are shown. The code does work if I specify an array of values.
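
    A minimal sketch of what I would try, assuming the PHP script is served from the same domain as the page: switch dataType to "json", because "jsonp" expects the response to be wrapped in a callback function, which the PHP above never produces, so the success handler gets nothing usable. The data option passes the typed text so the PHP side can filter on $_GET['term'] (if the script really is cross-domain, the PHP would instead need to echo $_GET['callback'] . '(' . json_encode($results) . ')'):

        $("#login_name").autocomplete({
            minLength: 2,
            source: function (request, response) {
                $.ajax({
                    url: "/myscript.php",              // same-origin path, adjust as needed
                    dataType: "json",
                    data: { term: request.term },      // what the user has typed so far
                    success: function (data) {
                        response($.map(data, function (item) {
                            return { label: item.user_login_name, value: item.user_id };
                        }));
                    }
                });
            }
        });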

    Read the article

  • Taking the data mapper approach in Zend Framework

    - by Seeker
    Let's assume the following table setup for a Zend Framework app. user (id) groups (id) groups_users (id, user_id, group_id, join_date) I took the Data Mapper approach to models, which basically gives me: Model_User, Model_UsersMapper, Model_DbTable_Users Model_Group, Model_GroupsMapper, Model_DbTable_Groups Model_GroupUser, Model_GroupsUsersMapper, Model_DbTable_GroupsUsers (for holding the relationships, which can be seen as entities; notice the "join_date" property) I'm defining the _referenceMap in Model_DbTable_GroupsUsers: protected $_referenceMap = array ( 'User' => array ( 'columns' => array('user_id'), 'refTableClass' => 'Model_DbTable_Users', 'refColumns' => array('id') ), 'App' => array ( 'columns' => array('group_id'), 'refTableClass' => 'Model_DbTable_Groups', 'refColumns' => array('id') ) ); I have these design problems in mind: 1) The Model_Group only mirrors the fields in the groups table. How can I return a collection of groups a user is a member of, together with the date the user joined each group? If I just added the property to the domain object, then I'd have to let the group mapper know about it, wouldn't I? 2) Let's say I need to fetch the groups a user belongs to. Where should I put this logic? Model_UsersMapper or Model_GroupsUsersMapper? I also want to make use of the referencing map (dependent tables) mechanism and probably use findManyToManyRowset or findDependentRowset, something like: $result = $this->getDbTable()->find($userId); $row = $result->current(); $groups = $row->findManyToManyRowset( 'Model_DbTable_Groups', 'Model_DbTable_GroupsUsers' ); This would produce two queries when I could have just written it as a single query. I will place this in the Model_GroupsUsersMapper class. An enhancement would be to add a getGroups method to the Model_User domain object which lazily loads the groups when needed by calling the appropriate method in the data mapper, which raises a second question: should I allow the domain object to know about the data mapper?

    Read the article

  • How can one setup a version control system on a local network, without a server?

    - by Andrew
    Edit: OK, so I learned that I guess I need a distributed source control system; however, are there any UI-based ones, and do they allow you to merge with other users on the network? This is kind of a two-part question, so here it goes. I want to start developing a web application at home (with multiple developers). However, I don't have a dedicated server, nor do I want to pay for one. So first, I don't know which version control system to use for this case; at work we mostly have TFS set up, so I am not too familiar with what's out there. What are the best free CVS/SVN tools? Second, is it possible to somehow set up CVS/SVN so that there is no dedicated server and both clients store up to one week of the source code from the last check-in? Also, it would be helpful if it could integrate with Visual Studio, though again this isn't that important. Problem: there are five users, and one acts as the server. Server connected: all OK. Server disconnected: no one can share. What I am looking for: no server; users still have versioning based on the version id of their last check-in. Users check all versions on the network to make sure they aren't outdated based on their last version id; if they are not outdated they check in and set the current version id +1, otherwise they merge/get latest first.
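
    One common way to get this without any dedicated server (my suggestion, not from the post) is a distributed system such as git, where every clone carries the full history, so any machine can go offline and commits still work locally; TortoiseGit provides a Windows GUI on top of it. A minimal sketch using a shared folder as the meeting point (the paths below are just examples):

        # create a bare "hub" repository on any folder everyone can reach
        git init --bare //some-pc/share/webapp.git
        # each developer clones it locally
        git clone //some-pc/share/webapp.git webapp
        cd webapp
        # work and commit locally, even while the share is unreachable
        git add . && git commit -m "first change"
        # synchronise whenever the shared folder is available again
        git pull --rebase && git push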

    Read the article

  • How to sort the file names in bash in this circumstance?

    - by Nicolas
    I have run a program to generate some results with different parameters (i.e. R, C and RP). These results are saved in files named results.txt, and I now need to parse these experimental results for analysis. In params_R_7_C_16_RP_0, 7 is the value of the parameter R, 16 is the value of the parameter C and 0 is the value of the parameter RP. I want to collect the results.txt files under the current directory for parsing, and sort the paths by the values of R, C and RP. I first use the following command to get the results.txt files that I want to parse: find ./ -name "results.txt" and the output is: ./params_R_11_C_9_RP_0/results.txt ./params_R_7_C_9_RP_0/results.txt ./params_R_7_C_4_RP_0/results.txt ./params_R_11_C_16_RP_0/results.txt ./params_R_9_C_4_RP_0/results.txt ./params_R_5_C_9_RP_0/results.txt ./params_R_9_C_25_RP_0/results.txt ./params_R_7_C_16_RP_0/results.txt ./params_R_5_C_25_RP_0/results.txt ./params_R_5_C_16_RP_0/results.txt ./params_R_11_C_4_RP_0/results.txt ./params_R_9_C_16_RP_0/results.txt ./params_R_7_C_25_RP_0/results.txt ./params_R_15_C_4_RP_0/results.txt ./params_R_5_C_4_RP_0/results.txt ./params_R_9_C_9_RP_0/results.txt I then change the command as follows: find ./ -name "results.txt" | sort and the output is: ./params_R_11_C_16_RP_0/results.txt ./params_R_11_C_25_RP_0/results.txt ./params_R_11_C_4_RP_0/results.txt ./params_R_11_C_9_RP_0/results.txt ./params_R_5_C_16_RP_0/results.txt ./params_R_5_C_25_RP_0/results.txt ./params_R_5_C_4_RP_0/results.txt ./params_R_5_C_9_RP_0/results.txt ./params_R_7_C_16_RP_0/results.txt ./params_R_7_C_25_RP_0/results.txt ./params_R_7_C_4_RP_0/results.txt ./params_R_7_C_9_RP_0/results.txt ./params_R_9_C_16_RP_0/results.txt ./params_R_9_C_25_RP_0/results.txt ./params_R_9_C_4_RP_0/results.txt ./params_R_9_C_9_RP_0/results.txt But I want the output to be: ./params_R_5_C_4_RP_0/results.txt ./params_R_5_C_9_RP_0/results.txt ./params_R_5_C_16_RP_0/results.txt ./params_R_5_C_25_RP_0/results.txt ./params_R_7_C_4_RP_0/results.txt ./params_R_7_C_9_RP_0/results.txt ./params_R_7_C_16_RP_0/results.txt ./params_R_7_C_25_RP_0/results.txt ./params_R_9_C_4_RP_0/results.txt ./params_R_9_C_9_RP_0/results.txt ./params_R_9_C_16_RP_0/results.txt ./params_R_9_C_25_RP_0/results.txt ... I could have named the directories with zero-padded values (e.g. params_R_005_C_004_RP_0) when generating the results, but it would take too much time to rerun the program. So I wonder whether there is a way to achieve this with a bash command.
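
    A minimal sketch of one way to do it (assuming the directory names always follow the params_R_<n>_C_<n>_RP_<n> pattern shown): split each path on '_' and sort numerically on the R, C and RP fields, or simply use GNU sort's version sort:

        # fields 3, 5 and 7 hold the R, C and RP values once the path is split on '_'
        find . -name "results.txt" | sort -t '_' -k3,3n -k5,5n -k7,7n

        # alternatively, version sort compares the embedded numbers as numbers
        find . -name "results.txt" | sort -V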

    Read the article

  • Convert function to read from string instead of file in C

    - by Dusty
    I've been tasked with updating a function which currently reads in a configuration file from disk and populates a structure: static int LoadFromFile(FILE *Stream, ConfigStructure *cs) { int tempInt; ... if ( fscanf( Stream, "Version: %d\n",&tempInt) != 1 ) { printf("Unable to read version number\n"); return 0; } cs->Version = tempInt; ... } to one which allows us to bypass writing the configuration to disk and instead pass it directly in memory, roughly equivalent to this: static int LoadFromString(char *Stream, ConfigStructure *cs) A few things to note: The current LoadFromFile function is incredibly dense and complex, reading dozens of versions of the config file in a backward compatible manner, which makes duplication of the overall logic quite a pain. The functions that generate the config file and those that read it originate in totally different parts of the old system and therefore don't share any data structures so I can't pass those directly. I could potentially write a wrapper, but again, it would need to handle any structure passed in in a backwards compatible manner. I'm tempted to just pass the file as is in as a string (as in the prototype above) and convert all the fscanf's to sscanf's but then I have to handle incrementing the pointer along (and potentially dealing with buffer overrun errors) manually. This has to remain in C, so no C++ functionality like streams can help here Am I missing a better option? Is there some way to create a FILE * that actually just points to a location in memory instead of on disk? Any pointers, suggestions or other help is greatly appreciated.
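
    A minimal sketch of one option (it assumes a POSIX-2008 system such as Linux/glibc, where fmemopen() is available): wrap the string in a FILE * so the existing fscanf-based parser can be reused unchanged:

        #include <stdio.h>
        #include <string.h>

        static int LoadFromString(char *text, ConfigStructure *cs)
        {
            /* fmemopen gives a FILE * backed by the in-memory buffer */
            FILE *stream = fmemopen(text, strlen(text), "r");
            if (stream == NULL)
                return 0;
            int result = LoadFromFile(stream, cs);   /* existing parser does all the work */
            fclose(stream);
            return result;
        }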

    Read the article

  • how to queue function behind ajax request

    - by user1052335
    What I'm asking is conceptual rather than why my code doesn't work. I'm trying to make a web app that displays content fetched from a database in a sequence, one at a time, and then waits for a response from the user before continuing to the next one. I'm working with javascript on the client side with the JQuery library and php on the server side. In theory, there's time while waiting for the user's input to fetch information from the server using an AJAX request and have it ready for the use by the time he clicks the button, but there's also a chance that such an AJAX request hasn't completed when he clicks the button. So I need something like pseudocode: display current information fetch next data point from the server in the background onUserInput { if ( ajax request complete) { present the information fetched in this request } else if (ajax request not complete) { wait for ajax request complete present information to user } My question is this: how does one implement this " else if (ajax request not complete) { wait for ajax request complete " part. I'm currently using JQuery for my AJAX needs. I'm somewhat new to working with AJAX, and I did search around, but I didn't find anything that seemed on point. I don't know what tools I should use for this. Some kind of queue maybe? I don't need to be spoon fed. I just need to know how this is done, using what tools or if my desired outcome would be accomplished in some other way entirely. Thanks.
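
    A minimal sketch of the usual jQuery (1.5+) approach: keep the jqXHR object returned by $.ajax and attach the button handler's work to it with .done(). If the request has already finished, the callback runs immediately; if not, it runs when the response arrives, so the "wait for ajax request complete" branch needs no extra code. The URL, element id and presentInformation() below are hypothetical placeholders:

        // prefetch the next data point while the user reads the current one
        var nextData = $.ajax({ url: "/next-datapoint.php", dataType: "json" });

        $("#nextButton").click(function () {
            nextData.done(function (data) {
                presentInformation(data);   // your own display logic
                // immediately start fetching the one after that
                nextData = $.ajax({ url: "/next-datapoint.php", dataType: "json" });
            });
        });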

    Read the article

  • WPF HiercharchicalDataTemplate.DataType: How to react on interfaces?

    - by David Schmitt
    Problem I've got a collection of IThings and I'd like to create a HierarchicalDataTemplate for a TreeView. The straightforward DataType={x:Type local:IThing} of course doesn't work, probably because the WPF creators didn't want to handle the possible ambiguities. Since this should handle IThings from different sources at the same time, referencing the implementing class is out of question. Current solution For now I'm using a ViewModel which proxies IThing through a concrete implementation: public interface IThing { string SomeString { get; } ObservableCollection<IThing> SomeThings { get; } // many more stuff } public class IThingViewModel { public IThing Thing { get; } public IThingViewModel(IThing it) { this.Thing = it; } } <!-- is never applied --> <HierarchicalDataTemplate DataType="{x:Type local:IThing}"> <!-- is applied, but looks strange --> <HierarchicalDataTemplate DataType="{x:Type local:IThingViewModel}" ItemsSource="{Binding Thing.SomeThings}"> <TextBox Text="{Binding Thing.SomeString}"/> </HierarchicalDataTemplate> Question Is there a better (i.e. no proxy) way?
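
    One alternative sketch that avoids the proxy (my suggestion, not from the post): implicit DataTemplates are never matched on interfaces, but a DataTemplateSelector can test for IThing at runtime and return a HierarchicalDataTemplate defined as a keyed resource; the selector instance is then assigned to TreeView.ItemTemplateSelector:

        public class ThingTemplateSelector : DataTemplateSelector
        {
            // set in XAML to a keyed HierarchicalDataTemplate that binds
            // ItemsSource="{Binding SomeThings}" and shows SomeString directly
            public DataTemplate ThingTemplate { get; set; }

            public override DataTemplate SelectTemplate(object item, DependencyObject container)
            {
                return item is IThing ? ThingTemplate : base.SelectTemplate(item, container);
            }
        }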

    Read the article

  • C# CreateElement method - how to add a child element with xmlns=""

    - by NealWalters
    How can I get the following code to add the element with "xmlns=''"? using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml; namespace ConsoleApplication1 { class Program { static void Main(string[] args) { string strXML = "<myroot>" + " <group3 xmlns='myGroup3SerializerStyle'>" + " <firstname xmlns=''>Neal3</firstname>" + " </group3>" + "</myroot>"; XmlDocument xmlDoc = new XmlDocument(); xmlDoc.LoadXml(strXML); XmlElement elem = xmlDoc.CreateElement(null, "lastname", null); elem.InnerText = "New-Value"; string strXPath = "/myroot/*[local-name()='group3' and namespace-uri()='myGroup3SerializerStyle']/firstname"; XmlNode insertPoint = xmlDoc.SelectSingleNode(strXPath); insertPoint.AppendChild(elem); string resultOuter = xmlDoc.OuterXml; Console.WriteLine("\n resultOuter=" + resultOuter); Console.ReadLine(); } } } My current output: resultOuter=<myroot><group3 xmlns="myGroup3SerializerStyle"><firstname xmlns="" >Neal3<lastname>New-Value</lastname></firstname></group3></myroot> The desired output: resultOuter=<myroot><group3 xmlns="myGroup3SerializerStyle"><firstname xmlns="" >Neal3<lastname xmlns="">New-Value</lastname></firstname></group3></myroot> For background, see related posts: http://www.stylusstudio.com/ssdn/default.asp?fid=23 (today) http://stackoverflow.com/questions/2410620/net-xmlserializer-to-element-formdefaultunqualified-xml (March 9, thought I fixed it, but bit me again today!)

    Read the article

  • Flush separate Castle ActiveRecord Transaction, and refresh object in another Transaction

    - by eanticev
    I've got all of my ASP.NET requests wrapped in a Session and a Transaction that gets committed only at the very end of the request. At some point during execution of the request, I would like to insert an object and make it visible to other potential threads - i.e. split the insertion into a new transaction, commit that transaction, and move on. The reason is that the request in question hits an API that then in turn hits another one of my pages (near-synchronously) to let me know that it processed, and that callback double-submits a transaction record, because the original request had not yet finished and therefore had not yet committed its transaction record. So I've tried wrapping the insertion code with a new SessionScope, a TransactionScope(TransactionMode.New), a combination of both, flushing everything manually, etc. However, when I call Refresh on the object I'm still getting the old object state. Here's a code sample showing what I'm seeing: Post outsidePost = Post.Find(id); // status of this post is Status.Old using (TransactionScope transaction = new TransactionScope(TransactionMode.New)) { Post p = Post.Find(id); p.Status = Status.New; // new status set here p.Update(); SessionScope.Current.Flush(); transaction.Flush(); transaction.VoteCommit(); } outsidePost.Refresh(); // refresh doesn't get the new status, status is still Status.Old Any suggestions, ideas, and comments are appreciated!

    Read the article

  • https not redirecting to mongrel upstream

    - by kip
    Normal HTTP is working fine for me with nginx and mongrel; however, when I attempt to use HTTPS I am directed to the "welcome to nginx" page.

        http {
            # passenger_root /opt/passenger-2.2.11;
            # passenger_ruby /usr/bin/ruby1.8;
            include mime.types;
            default_type application/octet-stream;
            #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
            #                '$status $body_bytes_sent "$http_referer" '
            #                '"$http_user_agent" "$http_x_forwarded_for"';
            #access_log logs/access.log main;
            sendfile on;
            #tcp_nopush on;
            #keepalive_timeout 0;
            keepalive_timeout 65;

            upstream mongrel {
                server 00.000.000.000:8000;
                server 00.000.000.000:8001;
            }

            server {
                listen 443;
                server_name domain.com;
                ssl on;
                ssl_certificate /etc/ssl/localcerts/domain_combined.crt;
                ssl_certificate_key /etc/ssl/localcerts/www.domain.com.key;
                # ssl_session_timeout 5m;
                # ssl_protocols SSLv2 SSLv3 TLSv1;
                # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
                # ssl_prefer_server_ciphers on;

                location / {
                    root /current/public/;
                    index index.html index.htm;
                    proxy_set_header X_FORWARDED_PROTO https;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Host $http_host;
                    proxy_redirect off;
                    proxy_pass http://mongrel;
                }
            }
        }

    Read the article

  • How do I change the color settings in emacs23 running in a terminal emulator?

    - by Anonymous
    I use xterm and set its appearance in ~/.Xdefaults: XTerm*background: paleTurquoise XTerm*foreground: black I also use emacs, but set its appearance differently in ~/.emacs: (set-background-color "black") (set-foreground-color "yellow") I usually run emacs within the terminal emulator with emacs -nw, rather than creating a separate X window. For some reason, this doesn't work properly for emacs23; instead, emacs retains the pale turquoise background of my xterm window. Looking at what's new in emacs23, I noted that: ** When running in a new enough xterm (newer than version 242), Emacs asks xterm what the background color is and it sets up faces accordingly for a dark background if needed (the current default is to consider the background light). So it's a feature, not a bug? Anyway, is there some way that I can tell emacs23 to ignore the xterm background settings when running in console mode, and use the settings in ~/.emacs instead? I'll also note that: It works fine in emacs23 running in a separate X window (without the -nw option). It worked fine in emacs22; and I'm not really sure whether I need to use emacs23... Running M-x set-background-color within emacs23 -nw has no effect. It's not just xterm: the same problem exists with $TERM=cygwin, for example.

    Read the article
