Search Results

Search found 11674 results on 467 pages for 'adding'.

  • Function returning dictionary, not seen by calling function

    - by twiga
    Hi there, I have an interesting problem with a function that returns a Dictionary<String, HashSet<String>>. The function converts some primitive structs to a Dictionary class, and is called as follows:

        Dictionary<String, HashSet<String>> Myset = new Dictionary<String, HashSet<String>>();
        Myset = CacheConverter.MakeDictionary(_myList);

    Upon execution of the two lines above, Myset is non-existent to the debugger. Adding a watch results in: "The name 'Myset' does not exist in the current context".

        public Dictionary<String, HashSet<String>> MakeDictionary(LightWeightFetchListCollection _FetchList)
        {
            Dictionary<String, HashSet<String>> _temp = new Dictionary<String, HashSet<String>>();
            // populate the Dictionary
            return _temp;
        }

    The Dictionary _temp is correctly populated by the called function and contains all the expected values. The problem seems to be that the dictionary is not returned at all. Examples I could find on the web of functions returning a plain Dictionary<string,string> work as expected.

  • HTML5 optimization for IE6.0

    - by marharépa
    Hello! Do you know any method to make this HTML code work in IE6 or 7 (or 8) without adding any extra HTML elements? Or does IE simply skip all the HTML5 elements?

        <!DOCTYPE HTML>
        <head>
            <meta charset="UTF-8">
            <title>title</title>
            <link type="text/css" rel="stylesheet" href="reset.css">
            <link type="text/css" rel="stylesheet" href="style.css">
        </head>
        <body>
            <header>code of header</header>
            <nav> code of nav </nav>
            <section> code of gallery </section>
            <article> code of article </article>
            <footer>code of footer</footer>
        </body>
        </html>

    Thank you.
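
    A hedged note beyond the original question: IE 6-8 will not apply CSS to elements they do not recognize, but they will style any element that has been created once via script before the page renders. That is the idea behind the widely used "HTML5 shiv" technique; a minimal sketch of it (element list abridged to the tags used above):

        <!--[if lt IE 9]>
        <script>
            // Teach legacy IE about the new elements so CSS selectors can match them.
            var tags = ['header', 'nav', 'section', 'article', 'footer'];
            for (var i = 0; i < tags.length; i++) {
                document.createElement(tags[i]);
            }
        </script>
        <![endif]-->

    The stylesheet also needs an explicit display: block rule for these elements, since legacy IE has no default styling for them; the community html5shiv script bundles both steps.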

  • Solr associations

    - by Tom
    Hi all, the last couple of days we have been thinking of using Solr as our search engine of choice. Most of the features we need are available out of the box or can easily be configured. There is however one feature that we absolutely need that seems to be well hidden (or missing) in Solr. I'll try to explain with an example. We have lots of documents that are actually businesses:

        <document>
            <name>Apache</name>
            <cat>1</cat>
            ...
        </document>
        <document>
            <name>McDonalds</name>
            <cat>2</cat>
            ...
        </document>

    In addition we have another XML file with all the categories and their synonyms:

        <cat id="1">
            <name>software</name>
            <synonym>IT</synonym>
        </cat>
        <cat id="2">
            <name>fast food</name>
            <synonym>restaurant</synonym>
        </cat>

    We want to associate businesses with categories so we can search using the name and/or synonyms of the category. But we do not want to merge these files at indexing time, because we want to be able to update the categories (adding/removing synonyms, ...) without reindexing all the businesses. Is there anything in Solr that handles this kind of association, or do we need to develop some specific pieces? All feedback and suggestions are welcome. Thanks in advance, Tom
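
    A hedged suggestion, not from the original post: since the business documents only carry the category id, one way to keep the category data out of the index is a query-time synonym filter that maps category names and synonyms to that id. The mapping lives in an external synonyms.txt, which can be edited (followed by a core reload) without reindexing any business. A sketch of the schema.xml fragment, with illustrative names:

        <!-- synonyms.txt would contain lines such as:  software, IT => 1  -->
        <fieldType name="cat_lookup" class="solr.TextField">
            <analyzer type="index">
                <tokenizer class="solr.StandardTokenizerFactory"/>
            </analyzer>
            <analyzer type="query">
                <tokenizer class="solr.StandardTokenizerFactory"/>
                <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
                        ignoreCase="true" expand="true"/>
            </analyzer>
        </fieldType>

    With the <cat> field declared of this type, a query for cat:IT is rewritten at search time to cat:1 and matches the Apache document without the synonym ever being indexed.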

  • Mac CoreLocation Services does not ask for permissions

    - by Ryan Nichols
    I'm writing a Mac app that needs to use CoreLocation services. The code and location updates work fine, as long as I manually authorize the app inside the Security preference pane. However, the framework does not automatically pop up a permission dialog. The documentation states:

        Important: The user has the option of denying an application's access to the
        location service data. During its initial uses by an application, the Core
        Location framework prompts the user to confirm that using the location
        service is acceptable. If the user denies the request, the CLLocationManager
        object reports an appropriate error to its delegate during future requests.

    I do get an error to my delegate, and the value of +locationServicesEnabled on CLLocationManager is correct. The only part missing is the prompt asking the user for permission. This occurs both on my development MBP and on a friend's MBP; neither of us can figure out what's wrong. Has anyone run into this? Relevant code:

        _locationManager = [CLLocationManager new];
        [_locationManager setDelegate:self];
        [_locationManager setDesiredAccuracy:kCLLocationAccuracyKilometer];
        ...
        [_locationManager startUpdatingLocation];

    UPDATE (answer): It seems there is a problem with sandboxing, in which the CoreLocation framework is not allowed to talk to com.apple.CoreLocation.agent. I suspect this agent is responsible for prompting the user for permissions. If you add the Location Services entitlement (com.apple.security.personal-information.location), it only gives your app the ability to use the CoreLocation framework; you also need access to the CoreLocation agent in order to ask the user for permission. You can grant that access by adding the entitlement com.apple.security.temporary-exception.mach-lookup.global-name with a value of com.apple.CoreLocation.agent. Users will then be prompted for access automatically, as you would expect. I've already filed a bug with Apple on this.
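
    Pulling the update's two entitlements together, the .entitlements plist would look roughly like this (a sketch assembled from the keys named above plus the standard sandbox key, not a verified sample):

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
            "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>com.apple.security.app-sandbox</key>
            <true/>
            <key>com.apple.security.personal-information.location</key>
            <true/>
            <!-- temporary exception so CoreLocation can reach the agent that shows the prompt -->
            <key>com.apple.security.temporary-exception.mach-lookup.global-name</key>
            <array>
                <string>com.apple.CoreLocation.agent</string>
            </array>
        </dict>
        </plist>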

  • LaTeX padding / margin hell

    - by darren
    Hi everyone, I have been wrestling with a LaTeX table for far too long. I need a table that has centered headers and body cells containing text that may wrap around. Because of the wrap-around requirement, I'm using p{xxx} instead of l for specifying cell widths. The problem this causes is that cell contents are not left-justified, so they look like spaced-out junk. To fix this problem I'm using \flushleft in each cell. This does left-justify the contents, but puts a ton of white space above and below the contents of the cell. Is there a way to stop \flushleft (or \center, for that matter) from adding copious amounts of vertical whitespace? Thanks.

        \begin{landscape}
        \centering
        % using p{xxx} here to wrap long text instead of overflowing it
        \begin{longtable}{ | p{4cm} || p{3cm} | p{3cm} | p{3cm} | p{3cm} | p{3cm} |}
        \hline
        & % these are table headings; the \center is causing a ton of whitespace as well
        \begin{center} \textbf{HTC HD2} \end{center} &
        \begin{center} \textbf{Motorola Milestone} \end{center} &
        \begin{center} \textbf{Nokia N900} \end{center} &
        \begin{center} \textbf{RIM Blackberry Bold 9700} \end{center} &
        \begin{center} \textbf{Apple iPhone 3GS} \end{center} \\
        \hline \hline
        % using flushleft here to left-justify, but again it causes a ton of
        % white space above and below cell contents
        \begin{flushleft}OS / Platform \end{flushleft}&
        \begin{flushleft}Windows Mobile 6.5 \end{flushleft}&
        \begin{flushleft}Google Android 2.1 \end{flushleft}&
        \begin{flushleft}Maemo \end{flushleft}&
        \begin{flushleft}Blackberry OS 5.0 \end{flushleft}&
        \begin{flushleft}iPhone OS 3.1 \end{flushleft} \\
        \hline
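
    A hedged fix beyond what the post tries: center and flushleft are display environments, so they add paragraph-level vertical skips inside each cell; the declaration forms \centering and \raggedright do not. With the array package they can be baked directly into the column specification, which removes the per-cell boilerplate entirely. A sketch:

        \usepackage{array}  % enables the >{...} column-spec syntax

        % left-justified (ragged-right) paragraph column, no extra vertical space
        \newcolumntype{L}[1]{>{\raggedright\arraybackslash}p{#1}}
        % centered paragraph column
        \newcolumntype{C}[1]{>{\centering\arraybackslash}p{#1}}

        \begin{longtable}{ | L{4cm} || C{3cm} | C{3cm} | C{3cm} | C{3cm} | C{3cm} | }
        \hline
        & \textbf{HTC HD2} & \textbf{Motorola Milestone} & \textbf{Nokia N900}
          & \textbf{RIM Blackberry Bold 9700} & \textbf{Apple iPhone 3GS} \\
        \hline \hline
        OS / Platform & Windows Mobile 6.5 & Google Android 2.1 & Maemo
          & Blackberry OS 5.0 & iPhone OS 3.1 \\
        \hline
        \end{longtable}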

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs of adding table indexes in SQL Server is the increased cost of insert/update/delete queries, to the benefit of select queries. I can conceptually understand what happens in the case of an insert, because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me, because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example, and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int
            ,col2 int
            ,col3 int
            ,col4 int

        PRIMARY KEY (col1, col2)
        INDEX IX_1 col3 INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1 = 12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index, if you prefer): the index is set up to make it easy to find the range of rows to be removed, and to remove them. However, at this point it also needs to update IX_1, and the query I gave it offers no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this: I have a database that is spending a significant amount of time in deletes, and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo, which lists in the details section the other indexes that need to be updated, but I get no indication of the relative cost of these other indexes. Are they all equal in this case? Is there some way I can estimate the impact of removing one or more of these indexes without having to actually try it?
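
    A hedged pointer for the last question: on the mechanics, each row deleted from the clustered index carries the values of the nonclustered keys (col3 here), so the engine can seek into each nonclustered index rather than scan it; wide (per-index) delete plans do this as a separate sorted pass per index. On estimating impact, SQL Server keeps per-index read/write counters, so the maintenance cost of an index can be compared against how often it actually serves reads before dropping it. A sketch (SQL Server 2005 and later; counters reset at instance restart):

        SELECT i.name,
               s.user_seeks, s.user_scans, s.user_lookups,  -- how often the index serves reads
               s.user_updates                                -- how often it must be maintained
        FROM sys.indexes AS i
        LEFT JOIN sys.dm_db_index_usage_stats AS s
               ON  s.object_id = i.object_id
               AND s.index_id = i.index_id
               AND s.database_id = DB_ID()
        WHERE i.object_id = OBJECT_ID('dbo.Foo');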

  • eclipse django using wrong settings.py in pythonpath

    - by user1290264
    I have PyDev/Django installed in Eclipse, and it runs fine. However, after adding a second Django project to Eclipse and running the server (http://127.0.0.1:8000), the PYTHONPATH seems to be stuck on project2, even when I run project1. As a summary, I have two Django projects: project1 and project2. When I run the Django server for project1 I get:

        Validating models...
        0 errors found

        Django version 1.5, using settings 'project1.settings'
        Development server is running at http://127.0.0.1:8000/
        Quit the server with CTRL-BREAK.

    The above seems to suggest that Django is using the correct settings file; however, when I go to http://127.0.0.1:8000/ it displays the URLs from project2. Also, if I go to http://127.0.0.1:8000/admin, the models are pulled from the sqlite.db file in project2 as well. I've even tried removing project2 from Eclipse entirely, and now at http://127.0.0.1:8000/admin I get this error:

        Python Path: ['C:\Users\Brad\workspaces\In Progress\project2',
                      'C:\Users\Brad\workspaces\In Progress\project2',
                      'C:\Python27\DLLs', 'C:\Python27\lib',
                      'C:\Python27\lib\plat-win', 'C:\Python27\lib\lib-tk',
                      'C:\Python27', 'C:\Python27\lib\site-packages',
                      'C:\Windows\system32\python27.zip']

    If I run the server on a different port with project1, the path seems fine:

        runserver 7000 --noreload

    Then http://127.0.0.1:7000/ uses project1's paths, but it doesn't seem like I should have to do this. Note: I have set up the run configurations as correctly as I know how. In the main tab, the project and main module both point to the correct project (project1), and the "PYTHONPATH that will be used in the run" includes project1. Also, I have cleared my browser history, cookies, and everything that Chrome would let me delete.

  • JMeter is not extracting the value correctly with the regex extractor

    - by Chris
    JMeter is not extracting the value correctly with the regex. When I try this regex (NAME="token" \s value="([^"]+?)") in Regex Coach with the following HTML, everything works fine, but when I add the regex to the request with a Regular Expression Extractor, it doesn't find the value, even though the output is the same HTML.

        <HTML>
        <script type="text/javascript">
        function dostuff(no, applicationID) {
            submitAction('APPS_NAME', 'noSelected=' + no + '&applicationID=' + applicationID);
        }
        </script>
        <FORM NAME="baseForm" ACTION="" METHOD="POST">
        <input type="hidden" NAME="token" value="fc95985af8aa5143a7b1d4fda6759a74" >
        <div id="loader" align="center">
            <div>
                <strong style="color: #003366;">Loading...</strong>
            </div>
            <img src="images/initial-loader.gif" align="top"/>
        </div>
        <BODY ONLOAD="dostuff('69489','test');">

    From the regex extractor:

        Reference name: token
        Regex: (NAME="token" \s value="([^"]+?)")
        Template: $2$
        Match no.: 1
        Default value: wrong-token

    The request following the POST of the previous code returns POST data: token=wrong-token in the next request in the tree viewer. But when I check the real request in a proxy, the token is there. Note: I tried the regex without the brackets and that didn't work either. Does anybody have an idea what's wrong here? Why can't JMeter find my token with the regex extractor?
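
    A hedged observation, not part of the original post: in the HTML above there is a single literal space between NAME="token" and value=..., while the pattern NAME="token" \s value= requires a literal space, then a whitespace character (\s), then another literal space, i.e. three characters, so it should not match this markup at all; the Regex Coach test presumably succeeded against slightly different text. A pattern that matches the markup as shown would be:

        NAME="token"\s+value="([^"]+?)"

    used with template $1$ and match no. 1, since the token is then in the first capture group.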

  • How can I make this method more Scalalicious

    - by Neil Chambers
    I have a function that calculates the left and right node values for some collection of treeNodes, given a simple node.id / node.parentId association. It's very simple and works well enough... but, well, I am wondering if there is a more idiomatic approach. Specifically, is there a way to track the left/right values without using some externally tracked value, but still keep the tasty recursion?

        /*
         * A tree node
         */
        case class TreeNode(val id: String, val parentId: String) {
          var left: Int = 0
          var right: Int = 0
        }

        /*
         * a method to compute the left/right node values
         */
        def walktree(node: TreeNode) = {
          /*
           * increment state for the inner function
           */
          var c = 0

          /*
           * A method to set the increment state
           */
          def increment = { c += 1; c } // poo

          /*
           * the tasty inner method
           * treeNodes is a List[TreeNode]
           */
          def walk(node: TreeNode): Unit = {
            node.left = increment
            /*
             * recurse on all direct descendants
             */
            treeNodes filter (_.parentId == node.id) foreach (walk(_))
            node.right = increment
          }

          walk(node)
        }

        walktree(someRootNode)

    Edit: The list of nodes is taken from a database. Pulling the nodes into a proper tree would take too much time; I am pulling a flat list into memory, and all I have is an association via node ids between parents and children. Adding left/right node values allows me to get a snapshot of all children (and children's children) with a single SQL query. The calculation needs to run very quickly in order to maintain data integrity should parent-child associations change (which they do very frequently). In addition to using the awesome Scala collections, I've also boosted speed by using parallel processing for some pre/post filtering on the tree nodes. I wanted to find a more idiomatic way of tracking the left/right node values. After looking at the answers listed, I have settled on this synthesized version:

        def walktree(node: TreeNode) = {
          def walk(node: TreeNode, counter: Int): Int = {
            node.left = counter
            node.right = treeNodes
              .filter(_.parentId == node.id)
              .foldLeft(counter + 1) { (counter, curnode) =>
                walk(curnode, counter) + 1
              }
            node.right
          }
          walk(node, 1)
        }

  • ContextMenu not popping up on Long click

    - by primal
    Hi, the context menu is not popping up on a long click on the list items in the ListView. I've extended BaseAdapter and used a ViewHolder to implement the custom list with TextViews and an ImageButton.

        adapter = new MyClickableListAdapter(this, R.layout.timeline, mObjectList);
        list.setAdapter(adapter);
        registerForContextMenu(list);

    Implementation of onCreateContextMenu:

        @Override
        public void onCreateContextMenu(ContextMenu menu, View v, ContextMenuInfo menuInfo) {
            // TODO Auto-generated method stub
            super.onCreateContextMenu(menu, v, menuInfo);
            Log.d(TAG, "Entering Context Menu");
            menu.setHeaderTitle("Context Menu");
            menu.add(Menu.NONE, DELETE_ID, Menu.NONE, "Delete")
                .setIcon(R.drawable.icon);
        }

    The XML for the ListView is here:

        <ListView android:id="@+id/list"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content" />

    I've been trying this for many days. I think it's impossible to register a context menu for a custom list view like this. Correct me if I am wrong (possibly with sample code). Now I am thinking of adding a button to the list item that displays a menu when clicked. Is that possible some other way than using Dialogs? Any help would be much appreciated.
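
    A hedged guess at the cause, beyond what the post itself establishes: a focusable, clickable child such as an ImageButton swallows the long press before the ListView row ever sees it, so the registered context menu never triggers. The usual fix is to make the button non-focusable in the row layout (or set android:descendantFocusability="blocksDescendants" on the row's root view); the id below is hypothetical:

        <ImageButton
            android:id="@+id/row_button"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:focusable="false"
            android:focusableInTouchMode="false" />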

  • How can I speed up Subversion checkins? (Using ANKH, latest, Visual Studio 2010)

    - by Timothy Khouri
    I've started working on a new web project with some friends... we are using the latest Subversion server (installed last week) and the latest version of Ankh. My web project is a whopping 1.5 megabytes (that's with all images, CSS files, DLLs after compiling, PDB files, etc.). Checking in even super-small changes (literally adding the letter "x" to a few files for testing) takes FOREVER (about 10 seconds; I almost killed myself). The Ankh client is measuring in BYTES PER SECOND... BYTES? per second... I must be doing something wrong. Does anyone know what config file has a joke totallyMessWithPeople=true so that I can turn it off or something? Oh, also, changing one "big" file of a whopping 10k gains speed up to nearly the speed of light (which is apparently 857 bytes per second). Help me Obi-Wan Kenobi, you're my only hope! EDIT: As a note... my real work project, which uses Visual SourceSafe 2005 (I know, ouch), uploads files at about 200-500 kbps from this very same computer/internet connection.

  • SQL Server: Clustering by timestamp; pros/cons

    - by Ian Boyd
    I have a table in SQL Server where I want inserts to be added to the end of the table (as opposed to a clustering key that would cause them to be inserted in the middle). This means I want the table clustered by some column that will constantly increase. This could be achieved by clustering on a datetime column:

        CREATE TABLE Things (
            ...
            CreatedDate datetime DEFAULT getdate(),
            [timestamp] timestamp,
            CONSTRAINT [IX_Things] UNIQUE CLUSTERED (CreatedDate)
        )

    But I can't guarantee that two Things won't have the same time, so my requirements can't really be achieved by a datetime column. I could add a dummy identity int column and cluster on that:

        CREATE TABLE Things (
            ...
            RowID int IDENTITY(1,1),
            [timestamp] timestamp,
            CONSTRAINT [IX_Things] UNIQUE CLUSTERED (RowID)
        )

    But you'll notice that my table already contains a timestamp column: a column which is guaranteed to be monotonically increasing. This is exactly the characteristic I want in a candidate cluster key. So I cluster the table on the rowversion (aka timestamp) column:

        CREATE TABLE Things (
            ...
            [timestamp] timestamp,
            CONSTRAINT [IX_Things] UNIQUE CLUSTERED (timestamp)
        )

    Rather than adding a dummy identity int column (RowID) to ensure an order, I use what I already have. What I'm looking for are thoughts on why this is a bad idea, and what other ideas are better. Note: community wiki, since the answers are subjective.
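
    A hedged caution of the kind the post asks for: a rowversion value is regenerated on every UPDATE of its row, not just on INSERT, so clustering on it makes every update move the row physically (and touch every nonclustered index, since they all carry the clustering key). A quick sketch of the churn:

        CREATE TABLE dbo.Demo (id int, [timestamp] timestamp);
        INSERT dbo.Demo (id) VALUES (1);
        SELECT [timestamp] FROM dbo.Demo;  -- some value, e.g. 0x...07D1
        UPDATE dbo.Demo SET id = 2;
        SELECT [timestamp] FROM dbo.Demo;  -- a new, larger value; under the proposed
                                           -- scheme the row would have to relocate

    If the rows in Things are never updated, that cost never materializes, but the 8-byte rowversion is still a wider key than a 4-byte identity int, and the width repeats in every nonclustered index.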

  • How do I sort an internationalized i18n table with symfony and doctrine?

    - by Maurizio
    I would like to display a list of records from an internationalized table using sfDoctrinePager. Not all the records have been translated into all the languages supported by the application, so I had to implement a fallback mechanism for some fields (by overriding the getFoo() function in Bar.class.php, as explained in another post here). I have a different fallback list for each culture. Everything works fine until it comes to sorting the records in alphabetical order. I'm sorting the records at the SQL (DQL) level, by adding an orderBy('t.name') to the query:

        $q = Doctrine::getTable('Foo')
            ->createQuery('f')
            ->leftJoin('f.Translation t')
            ->orderBy('t.name')

    But here come the troubles: the list does not get sorted correctly, regardless of the active culture. I get rather better results when I limit the translations to the active culture, like this:

        ->leftJoin('f.Translation t WITH lang = ?', $request->getParameter('sf_culture'));

    Then the sorting is correct, as long as all the translations exist for the active culture. If a translation does not exist and I have to take the name from the fallback language, the record is displayed at the very beginning of the list (I understand this happens because the value for the current culture is null). My question is: is there a best practice for getting internationalized fields (needing fallbacks) sorted correctly with Doctrine and sfDoctrinePager? Thank you in advance.

  • red black tree balancing?

    - by Anirudh Kaki
    I am working on generating a tango tree, where I need to check whether every subtree in the tango tree is balanced or not; if it is not balanced, I need to make it balanced. I have been trying hard to make the entire RB-tree balanced, but I am not getting the logic right, so can anyone help me out? Here is the code I use to check whether my tree is balanced, but when it is not balanced, how can I make it balanced?

        static boolean verifyProperty5(rbnode n) {
            int left = 0, right = 0;
            if (n != null) {
                bh++;
                left = blackHeight(n.left, 0);
                right = blackHeight(n.right, 0);
            }
            if (left == right) {
                System.out.println("black height is :: " + bh);
                return true;
            } else {
                System.out.println("in balance");
                return false;
            }
        }

        public static int blackHeight(rbnode root, int len) {
            bh = 0;
            blackHeight(root, path1, len);
            return bh;
        }

        private static void blackHeight(rbnode root, int path1[], int len) {
            if (root == null)
                return;
            if (root.color == "black") {
                root.black_count = root.parent.black_count + 1;
            } else {
                root.black_count = root.parent.black_count;
            }
            if ((root.left == null) && (root.right == null)) {
                bh = root.black_count;
            }
            blackHeight(root.left, path1, len);
            blackHeight(root.right, path1, len);
        }

  • Visual Studio 2008: Can't connect to known good TFS 2010 beta 2

    - by p.campbell
    A freshly installed TFS 2010 Beta 2 is at http://serverX:8080/tfs. A Windows 7 developer machine has VS 2008 Pro SP1 and the VS 2008 Team Explorer (no SP). The TFS 2008 Service Pack 1 didn't work for me: "None of the products that are addressed by this software update are installed on this computer." The developer machine is able to browse the TFS site at the above URL. The issue arises when trying to add the TFS server in the Team Explorer window in Visual Studio 2008. Here's the error from the screenshot:

        Unable to connect to this Team Foundation Server. Possible reasons for failure include:
        - The Team Foundation Server name, port number or protocol is incorrect.
        - The Team Foundation Server is offline.
        - Password is expired or incorrect.

    The TFS server is up and running properly. Firewall ports are open, and the server is accessible via the browser on the dev machine!

    Question: how can you connect from VS 2008 Pro to a TFS 2010 Beta 2 server?

    Resolution: here's how I solved this problem:
    - Installed VS 2008 Team Explorer as above.
    - Re-installed VS 2008 Service Pack 1.
    - When adding a TFS server to Team Explorer, you MUST specify the URL as http://[tfsserver]:[port]/[vdir]/[projectCollection]. In my case above, it was http://serverX:8080/tfs/AppDev-TestProject.
    - You cannot simply add the TFS server name and have VS look for all project collections on the server. TFS 2010 has a new URL format (by default), and VS 2008 doesn't know how to gather that list.

  • NSManagedObject from existing custom class

    - by A.S.
    Hello there. I have an existing class that has methods to deserialize it from XML in my code. Now I need to create a correct Core Data model from that class. Its objects will be created not only from Core Data storage but also by deserializing XML (somewhat like instance->title = [[NSString stringWithUTF8String:(const char *)subNode->children->content] retain];) without saving to Core Data, and sometimes I need to save them. What are the correct steps to modify the existing class to do that, apart from adding the Core Data framework and making my class an NSManagedObject instead of an NSObject? Class sample:

        @interface TSTSong : NSManagedObject<NTSerializableObject> {
            NSString *identifier;
            NSString *title;
            float length;
            NSURL *previewURL;
            NSString *author;
            NSURL *coverURL;
            NSString *appStoreId;
            BOOL isPurchased;
            NSURL *bannerURL;
            NSDecimalNumber *priceValue;
            NSLocale *priceLocale;
        }

    P.S. I'm a noob, so if I'm doing something wrong, please let me know. Sorry for my English.

  • Call Webservice using Javascript

    - by ajithperuva
    I am trying to call a web service using JavaScript, but it shows an error like "selectSingleNode() is not a method". I am trying it in Mozilla Firefox; it works perfectly in Internet Explorer when I change XMLHttpRequest to ActiveXObject. Here I am adding the source code I tried in Firefox:

        // Web Service functionality
        // Global vars
        var xmlDoc = null;
        var _serviceCallback = null;

        // Calls web service; the url and a callback function (or null) must be provided.
        // The callback function receives true or false based on success of the call.
        function callWebService(url, callback) {
            _serviceCallback = callback;
            if (xmlDoc == null) {
                xmlDoc = new XMLHttpRequest();
            }
            xmlDoc.onreadystatechange = stateChange; // callback for readystate
            xmlDoc.async = true; // do background processing
            //xmlDoc.load(url);
            xmlDoc.open('GET', url);
            xmlDoc.send();
            //var doc = xmlDoc.responseXML;
        }

        // Updates readystate by callback
        function stateChange() {
            if (xmlDoc.readyState == 4) {
                var err = xmlDoc.parseError;
                var result = false;
                var nd;
                if (err.errorCode == 0) {
                    nd = xmlDoc.selectSingleNode("//envelope/date_time");
                    if (nd.text != "")
                        result = true;
                }
                // perform callback if provided
                if (_serviceCallback != null)
                    _serviceCallback(result, nd == null ? "" : nd.text);
            }
        }

        // Callback supplied to the XMLHttpRequest call
        function callbackTest(result, data) {
            obj = document.getElementById("txtOuput");
            if (result)
                obj.value = "Success " + data;
            else
                obj.value = "Web Service Call Failed";
        }

    Please help me... this has already killed 8 hours of mine...
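
    A hedged diagnosis, not from the original post: parseError and selectSingleNode are MSXML (ActiveX) extensions; the W3C XMLHttpRequest that Firefox provides has neither, which is exactly the error shown. A cross-browser sketch of the same readiness check, using the standard responseXML document and DOM traversal instead:

        function stateChange() {
            if (xmlDoc.readyState == 4 && xmlDoc.status == 200) {
                var doc = xmlDoc.responseXML;  // parsed XML DOM, standard everywhere
                var result = false, text = "";
                if (doc != null) {
                    // DOM lookup in place of the MSXML-only selectSingleNode()
                    var nodes = doc.getElementsByTagName("date_time");
                    if (nodes.length > 0 && nodes[0].firstChild != null) {
                        text = nodes[0].firstChild.nodeValue;
                        result = (text != "");
                    }
                }
                if (_serviceCallback != null) _serviceCallback(result, text);
            }
        }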

  • How to call Twitter's Streaming/Filter Feed with urllib2/httplib?

    - by Simon
    Update: I switched this back from answered, as I tried the solution posed in Nick's cogent answer and switched to Google's urlfetch:

        logging.debug("starting urlfetch for http://%s%s" % (self.host, self.url))
        result = urlfetch.fetch("http://%s%s" % (self.host, self.url),
                                payload=self.body, method="POST",
                                headers=self.headers, allow_truncated=True,
                                deadline=5)
        logging.debug("finished urlfetch")

    but unfortunately "finished urlfetch" is never printed. I see the timeout happen in the logs (it returns 200 after 5 seconds), but execution doesn't seem to return.

    Hi all, I'm attempting to play around with Twitter's Streaming (aka firehose) API on Google App Engine (I'm aware this probably isn't a great long-term play, as you can't keep the connection perpetually open with GAE), but so far I haven't had any luck getting my program to actually parse the results returned by Twitter. Some code:

        logging.debug("firing up urllib2")
        req = urllib2.Request(url="http://%s%s" % (self.host, self.url),
                              data=self.body, headers=self.headers)
        logging.debug("called urlopen for %s %s, about to call urlopen" % (self.host, self.url))
        fobj = urllib2.urlopen(req)
        logging.debug("called urlopen")

    When this executes, unfortunately, my debug output never shows the "called urlopen" line. I suspect what's happening is that Twitter keeps the connection open and urllib2 doesn't return because the server doesn't terminate the connection. Wireshark shows the request being sent properly and a response coming back with results. I tried adding Connection: close to my request headers, but that didn't yield a successful result. Any ideas on how to get this to work? Thanks, Simon

  • Maven jaxb generate plugin to read xsd files from multiple directories

    - by ziggy
    If I have XSD files in the following directories:

        src/main/resources/xsd
        src/main/resources/schema/common
        src/main/resources/schema/soap

    how can I instruct the Maven JAXB plugin to generate JAXB classes using all the schema files in the above directories? I can get it to generate the class files if I specify one of the folders, but I don't know how to include all three. Here is how I generate the files for one folder:

        <plugin>
            <groupId>org.jvnet.jaxb2.maven2</groupId>
            <artifactId>maven-jaxb2-plugin</artifactId>
            <executions>
                <execution>
                    <goals>
                        <goal>generate</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <schemaDirectory>src/main/resources/xsd</schemaDirectory>
            </configuration>
        </plugin>

    I tried adding multiple <schemaDirectory> entries in the configuration element, but it just ignores all of them if I do that. Thanks
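
    A hedged sketch of one workaround: this plugin takes a single <schemaDirectory>, but it also accepts <schemaIncludes> patterns that are resolved relative to it, so pointing the directory at the common parent and listing the subfolders may achieve the same effect (untested; the globs assume the plugin's usual ant-style matching):

        <configuration>
            <schemaDirectory>src/main/resources</schemaDirectory>
            <schemaIncludes>
                <include>xsd/*.xsd</include>
                <include>schema/common/*.xsd</include>
                <include>schema/soap/*.xsd</include>
            </schemaIncludes>
        </configuration>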

  • How to ship numpy with web2py application under myapp/modules?

    - by Newbie07
    I am getting the following error while importing numpy from applications/myapp/modules:

        Traceback (most recent call last):
          File "/home/mdipierro/make_web2py/web2py/gluon/restricted.py", line 212, in restricted
          File "D:/web2py_win/web2py/applications/myapp/controllers/default.py", line 13, in <module>
          File "/home/mdipierro/make_web2py/web2py/gluon/custom_import.py", line 100, in custom_importer
          File "applications\myapp\modules\numpy\__init__.py", line 137, in <module>
          File "/home/mdipierro/make_web2py/web2py/gluon/custom_import.py", line 81, in custom_importer
        ImportError: Cannot import module 'add_newdocs'

    I tried adding 'applications.myapp.modules.' to the 'import add_newdocs' statement in numpy/__init__.py, and the error propagates to the subsequent imports (i.e., add_newdocs imports some other things and I get the ImportError again for those imports). So I narrowed the problem down to the "working directory" of the import statement. However, I do not wish to add 'applications.myapp.modules.' to every import statement inside the package, since that would be impractical and hard to edit if someone decides to rename the app later on. How do I make the import work smoothly? NOTE: It is necessary for me to put the numpy package in the app to ensure ease of deployment.

  • how can I speed up insertion of many rows to a table via ADO.NET?

    - by jcollum
    I have a table that has 5 columns: AcctId (int), Address1 (varchar), Address2 (varchar), Person1 (varchar), Person2 (varchar). I'm generating random data to insert into this table via a C# console application. I've tried doing this random-data insert via SQL Server and decided it was not a good solution: SQL is not good at randomizing on a per-row basis. Generating the random data, 975k rows of it, takes a minimal amount of time; it's in a List of custom objects. I need to take this random data and update many rows in the database with it. I tried updating the rows one at a time; very slow, because of the repeated searching of the List object in code. So I think the best approach is to put all the randomized data into a table in the database, then update all the other tables that use this data. I.e.:

        UPDATE t
        SET t.Address1 = d.Address1
        FROM Table1 t
        INNER JOIN RandomizedData d ON d.AcctId = t.Acct_ID

    The database is very un-normalized, so this Acct data is sprinkled all over the place, and I've got no control over the normalization. So, having decided to insert all of the randomized data into a single table, I set out to create insert scripts:

        USE TheDatabase

        INSERT tmp_RandomizedData
        SELECT 1,'4392 EIGHTH AVE','','JENNIFER CARTER','BARBARA CARTER'
        UNION ALL
        SELECT 2,'2168 MAIN ST','HNGR F','DANIEL HERNANDEZ','SUSAN MARTIN'
        -- etc., another 98 times...
        -- FYI, this is not real data!

    I'm building this INSERT script in batches of 100, and it's taking on average 175 ms to run each insert. Does that seem like a long time? It's going to take about 35 minutes to run the whole insert. The table doesn't have a primary key or any indexes; I was planning on adding those after all the data is inserted (thinking that would be faster). Is there a better way to do this?
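
    A hedged alternative to scripted INSERTs: since the rows already live in memory in the C# application, ADO.NET's SqlBulkCopy can stream them to the server in one bulk operation, which is typically far faster than batched INSERT statements. A minimal sketch, assuming the rows have been copied into a DataTable whose columns line up with tmp_RandomizedData:

        using System.Data;
        using System.Data.SqlClient;

        static void BulkLoad(DataTable rows, string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var bulk = new SqlBulkCopy(connection))
                {
                    bulk.DestinationTableName = "dbo.tmp_RandomizedData";
                    bulk.BatchSize = 10000;    // commit in chunks instead of one giant batch
                    bulk.WriteToServer(rows);  // streams all rows in a single bulk operation
                }
            }
        }

    Loading into the heap first and adding the primary key and indexes afterwards, as planned in the post, combines well with this.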

  • Using bitwise operators on > 32 bit integers

    - by dqhendricks
    I am using bitwise operations to represent many access-control flags within one integer.

        ADMIN_ACCESS = 1;
        EDIT_ACCOUNT_ACCESS = 2;
        EDIT_ORDER_ACCESS = 4;

        var myAccess = 3; // i.e. ( ADMIN_ACCESS | EDIT_ACCOUNT_ACCESS )

        if ( myAccess & EDIT_ACCOUNT_ACCESS ) { // check for correct access
            // allow for editing of account
        }

    Most of this occurs on the PHP side of my project. There is one piece, however, where JavaScript is used to join several access flags with | when saving someone's access level. This works fine up to a point. I have found that once an integer (flag) gets too large (> 32 bits), it no longer works correctly with bitwise operators in JavaScript. For instance:

        alert( 4294967296 | 1 ); // equals 1, but should equal 4294967297

    I am trying to find a workaround for this so that I do not have to limit my number of access-control flags to 32. Each access-control flag is twice the previous flag, so that the flags do not interfere with each other:

        dec(4)  = bin(100)
        dec(8)  = bin(1000)
        dec(16) = bin(10000)

    I have noticed that adding two of these flags together with a simple + seems to come out with the same answer as a bitwise OR operation, but I'm having trouble wrapping my head around whether this is a safe substitution, or whether there might be problems with doing it. Can anyone comment on the validity of this workaround? Example:

        (4294967296 | 262144 | 524288) == (4294967296 + 262144 + 524288)
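
    A hedged answer to the validity question: JavaScript's bitwise operators convert their operands to 32-bit signed integers, which is why 4294967296 | 1 collapses, while ordinary arithmetic on Numbers stays exact up to 2^53. Addition produces the same result as OR precisely when the two operands share no set bits, and distinct powers of two guarantee that, so + is a safe way to combine these flags. Testing a flag without bitwise operators can be done with division and remainder; a sketch:

        // Combining: safe with + because distinct powers of two never produce a carry.
        var BIG_FLAG = 4294967296;  // 2^32, beyond the 32-bit bitwise range
        var myAccess = BIG_FLAG + 262144 + 524288;

        // Testing: "shift" right by dividing, then read the lowest bit with % 2.
        function hasFlag(access, flag) {
            return Math.floor(access / flag) % 2 === 1;
        }

        hasFlag(myAccess, BIG_FLAG);  // true
        hasFlag(myAccess, 1);         // false

    The whole scheme stays exact only while the combined value is below 2^53, i.e. at most 53 flags.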

  • Call WCF Service Through Javascript, AJAX, or JQuery

    - by obautista
    I created a number of standard WCF services (the service contract and host (.svc) are in separate assemblies). I fired up a web site in IIS to host the services (i.e., the address is http://services:1000/wcfservices.svc). Then in my web site project I added the reference, and I am able to call the services normally. Now I need to call some of the services client-side. I am not sure whether I should be looking at articles on calling WCF services through AJAX, jQuery, or JSON-enabled WCF services. Can anyone offer any thoughts or experience with configuring this? Some of the changes I made were adding the following to the operation contract:

        [OperationContract]
        [WebInvoke(Method = "POST", UriTemplate = "SetFoo")]
        void SetFoo(string Id);

    Then this above the implementation of the interface:

        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]

    Then in the service web.config I have this:

        <serviceHostingEnvironment aspNetCompatibilityEnabled="true">
            <baseAddressPrefixFilters>
                <add prefix="http://services:1000/wcfservices.svc/"/>
            </baseAddressPrefixFilters>
        </serviceHostingEnvironment>
        <serviceHostingEnvironment multipleSiteBindingsEnabled="false" />

    Then on the client side I attempted this:

        <asp:ScriptManagerProxy ID="ScriptManagerProxy1" runat="server">
            <CompositeScript>
                <Scripts>
                    <asp:ScriptReference Path="http://Flixsit:1000/FlixsitWebServices.svc" />
                </Scripts>
            </CompositeScript>
        </asp:ScriptManagerProxy>

    I am attempting to call the service like this in JavaScript:

        wcfservices.SetFoo(string Id);

    Nothing is working. If there is a better solution, such as calling JSON-enabled WCF services, jQuery, etc., I am willing to make any changes. Thanks for any suggestions/tips provided.
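
    A hedged sketch of the piece that is usually missing in this setup: an ASP.NET AJAX (ScriptManager) client needs the endpoint to use webHttpBinding with an enableWebScript behavior, which is what makes WCF emit a JavaScript proxy and speak JSON. Roughly, with illustrative type names:

        <system.serviceModel>
            <behaviors>
                <endpointBehaviors>
                    <behavior name="AjaxBehavior">
                        <enableWebScript />  <!-- generates the JS proxy, enables JSON -->
                    </behavior>
                </endpointBehaviors>
            </behaviors>
            <services>
                <service name="Flixsit.FlixsitWebServices">
                    <endpoint address="" binding="webHttpBinding"
                              behaviorConfiguration="AjaxBehavior"
                              contract="Flixsit.IFlixsitWebServices" />
                </service>
            </services>
        </system.serviceModel>

    The service reference then belongs in the ScriptManager's <Services> collection as an <asp:ServiceReference Path=".../FlixsitWebServices.svc" />, not in <CompositeScript>, and the generated proxy is invoked via the contract's namespace-qualified name with success/failure callbacks.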

  • Triangle numbers problem: show output within 4 seconds

    - by Daredevil
    The sequence of triangle numbers is generated by adding the natural numbers, so the 7th triangle number is 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms are: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ... Let us list the factors of the first seven triangle numbers:

        1:  1
        3:  1, 3
        6:  1, 2, 3, 6
        10: 1, 2, 5, 10
        15: 1, 3, 5, 15
        21: 1, 3, 7, 21
        28: 1, 2, 4, 7, 14, 28

    We can see that 28 is the first triangle number to have over five divisors. Given an integer n, display the first triangle number having at least n divisors.

        Sample input: 5
        Output: 28

    Input constraints: 1 <= n <= 320. I was obviously able to do this question, but I used a naive algorithm: get n, then generate triangle numbers and count their divisors using the mod operator. The challenge was to show the output within 4 seconds of input, and on high inputs like 190 and above it took almost 15-16 seconds. Then I tried putting the triangle numbers and their divisor counts into a 2D array first, and then reading the input from the user and searching the array, but somehow I couldn't make it work; I got a lot of processor faults. Please try doing it with this method and paste the code, or if there are any better ways, please tell me.
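
    A hedged sketch of the standard speed-up (this is essentially Project Euler problem 12): the k-th triangle number is k(k+1)/2, and since k and k+1 are coprime, the divisor-count function is multiplicative across the two halves, so the count can be computed from two numbers near the square root of the triangle number instead of factoring the triangle number itself. In C:

        #include <stdio.h>

        /* Count divisors of m by trial division up to sqrt(m). */
        static long divisors(long m) {
            long count = 0;
            for (long i = 1; i * i <= m; i++) {
                if (m % i == 0)
                    count += (i * i == m) ? 1 : 2;  /* i pairs with m / i */
            }
            return count;
        }

        int main(void) {
            int n;
            if (scanf("%d", &n) != 1) return 1;
            for (long k = 1; ; k++) {
                /* Halve the even member so the two factors stay coprime. */
                long a = (k % 2 == 0) ? k / 2 : k;
                long b = (k % 2 == 0) ? k + 1 : (k + 1) / 2;
                if (divisors(a) * divisors(b) >= n) {  /* multiplicativity of d() */
                    printf("%ld\n", k * (k + 1) / 2);
                    return 0;
                }
            }
        }

    For n = 5 this prints 28, matching the sample, and for the full constraint (n <= 320) it finishes far inside the 4-second limit.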

  • SQL Concurrent test update question

    - by ptoinson
    Howdy folks, I have a SQL Server 2008 database in which I have a table for tags. A tag is just an id and a name. The definition of the Tag table looks like:

        CREATE TABLE [dbo].[Tag](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [Name] [varchar](255) NOT NULL
            CONSTRAINT [PK_Tag] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
                    ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        )

    Name also has a unique index. Further, I have several processes adding data to this table at a pretty rapid rate. These processes use a stored proc that looks like:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
            DECLARE @ID int
            SET @ID = (SELECT ID FROM Tag WHERE Name = @Name)
            IF @ID IS NULL
            BEGIN
                INSERT Tag(Name) VALUES (@Name)
                RETURN SCOPE_IDENTITY()
            END
            ELSE
            BEGIN
                RETURN @ID
            END

    My issue is that, beyond my being a novice at concurrent database design, there seems to be a race condition that occasionally causes an error saying that I'm trying to enter duplicate keys (Name) into the DB. The error is: "Cannot insert duplicate key row in object 'dbo.Tag' with unique index 'IX_Tag_Name'." This makes sense; I'm just not sure how to fix it. If it were code, I would know how to lock the right areas; SQL Server is quite a different beast. First question: what is the proper way to code this 'check, then update' pattern? It seems I need an exclusive lock on the row during the check, rather than a shared lock, but it's not clear to me what the best way to do that is. Any help in the right direction will be greatly appreciated. Thanks in advance.
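
    A hedged sketch of the usual fix for this pattern: do the check and the insert inside one transaction, and take restrictive locks on the first read so that two concurrent callers cannot both conclude the name is absent. The UPDLOCK and HOLDLOCK hints are the standard way to express that:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
        BEGIN
            DECLARE @ID int

            BEGIN TRAN
                -- UPDLOCK: take an update lock (not a shared one) on the read;
                -- HOLDLOCK: keep a serializable range lock so nobody can insert
                --           this name between our check and our insert
                SELECT @ID = ID
                FROM Tag WITH (UPDLOCK, HOLDLOCK)
                WHERE Name = @Name

                IF @ID IS NULL
                BEGIN
                    INSERT Tag(Name) VALUES (@Name)
                    SET @ID = SCOPE_IDENTITY()
                END
            COMMIT

            RETURN @ID
        END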
