Search Results

Search found 10691 results on 428 pages for 'batch insert'.

Page 368/428

  • Clustered index - multi-part vs single-part index and effects of inserts/deletes

    - by Anssssss
    This question is about what happens with the reorganizing of data in a clustered index when an insert is done. I assume that it should be more expensive to do inserts on a table which has a clustered index than one that does not because reorganizing the data in a clustered index involves changing the physical layout of the data on the disk. I'm not sure how to phrase my question except through an example I came across at work. Assume there is a table (Junk) and there are two queries that are done on the table, the first query searches by Name and the second query searches by Name and Something. As I'm working on the database I discovered that the table has been created with two indexes, one to support each query, like so: --drop table Junk1 CREATE TABLE Junk1 ( Name char(5), Something char(5), WhoCares int ) CREATE CLUSTERED INDEX IX_Name ON Junk1 ( Name ) CREATE NONCLUSTERED INDEX IX_Name_Something ON Junk1 ( Name, Something ) Now when I looked at the two indexes, it seems that IX_Name is redundant since IX_Name_Something can be used by any query that desires to search by Name. So I would eliminate IX_Name and make IX_Name_Something the clustered index instead: --drop table Junk2 CREATE TABLE Junk2 ( Name char(5), Something char(5), WhoCares int ) CREATE CLUSTERED INDEX IX_Name_Something ON Junk2 ( Name, Something ) Someone suggested that the first indexing scheme should be kept since it would result in more efficient inserts/deletes (assume that there is no need to worry about updates for Name and Something). Would that make sense? I think the second indexing method would be better since it means one less index needs to be maintained. I would appreciate any insight into this specific example or directing me to more info on maintenance of clustered indexes.

    Read the article

  • Name for method that takes a string value and returns DBNull.Value || string

    - by David Murdoch
    I got tired of writing the following code: /* Commenting out irrelevant parts public string MiddleName; public void Save(){ SqlCommand = new SqlCommand(); // blah blah...boring INSERT statement with params etc go here. */ if(MiddleName==null){ myCmd.Parameters.Add("@MiddleName", DBNull.Value); } else{ myCmd.Parameters.Add("@MiddleName", MiddleName); } /* // more boring code to save to DB. }*/ So, I wrote this: public static object DBNullValueorStringIfNotNull(string value) { object o; if (value == null) { o = DBNull.Value; } else { o = value; } return o; } // which would be called like: myCmd.Parameters.Add("@MiddleName", DBNullValueorStringIfNotNull(MiddleName)); If this is a good way to go about doing this then what would you suggest as the method name? DBNullValueorStringIfNotNull is a bit verbose and confusing. I'm also open to ways to alleviate this problem entirely. I'd LOVE to do this: myCmd.Parameters.Add("@MiddleName", MiddleName==null ? DBNull.Value : MiddleName); but that won't work. I've got C# 3.5 and SQL Server 2005 at my disposal if it matters.
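    A common way around the ternary that "won't work" (sketched here as a suggestion, not the asker's code) is to cast one branch to object so both branches share a type; the null-coalescing operator then reads even more tightly:

        // Sketch: casting the string branch to object lets the compiler pick a common type.
        myCmd.Parameters.AddWithValue("@MiddleName", (object)MiddleName ?? DBNull.Value);

    With that, the helper method (and the naming question) can disappear entirely for simple string parameters.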

    Read the article

  • PHP: Undefined index error

    - by TaG
    I get the following error on line 8: Undefined index: real_name which is $privacy_policy = mysqli_real_escape_string($mysqli, $_POST['privacy_policy']); I was wondering how can I fix this problem? Here is the PHP. if (isset($_POST['submitted'])) { $mysqli = mysqli_connect("localhost", "root", "", "sitename"); $dbc = mysqli_query($mysqli,"SELECT users.* FROM users WHERE user_id=3"); $privacy_policy = mysqli_real_escape_string($mysqli, $_POST['privacy_policy']); if (mysqli_num_rows($dbc) == 0) { $mysqli = mysqli_connect("localhost", "root", "", "sitename"); $dbc = mysqli_query($mysqli,"INSERT INTO users (user_id, privacy_policy) VALUES ('$user_id', '$privacy_policy')"); } if ($dbc == TRUE) { $dbc = mysqli_query($mysqli,"UPDATE users SET privacy_policy = '$privacy_policy' WHERE user_id = '$user_id'"); echo '<p class="changes-saved">Your changes have been saved!</p>'; } if (!$dbc) { print mysqli_error($mysqli); return; } } Here is the HTML. <form method="post" action="index.php"> <fieldset> <ul> <li><input type="checkbox" name="privacy_policy" id="privacy_policy" value="yes" <?php if (isset($_POST['privacy_policy'])) { echo 'checked="checked"'; } else if($privacy_policy == "yes") { echo 'checked="checked"'; } ?> /></li> <li><input type="submit" name="submit" value="Save Changes" class="save-button" /> <input type="hidden" name="submitted" value="true" /> <input type="submit" name="submit" value="Preview Changes" class="preview-changes-button" /></li> </ul> </fieldset> </form>
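    The notice usually means the form field simply wasn't submitted: an unchecked checkbox sends nothing, so $_POST['privacy_policy'] does not exist. A minimal sketch of guarding the lookup (the 'no' default is an assumption about what an unchecked box should mean here):

        $privacy_policy = isset($_POST['privacy_policy'])
            ? mysqli_real_escape_string($mysqli, $_POST['privacy_policy'])
            : 'no';   // assumed default when the checkbox is left unchecked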

    Read the article

  • MySQL Normalization stored procedure performance

    - by srkiNZ84
    Hi, I've written a stored procedure in MySQL to take values currently in a table and to "Normalize" them. This means that for each value passed to the stored procedure, it checks whether the value is already in the table. If it is, it stores the id of that row in a variable. If the value is not in the table, it stores the newly inserted value's id. The stored procedure then takes the ids and inserts them into a table which is equivalent to the original de-normalized table, but this table is fully normalized and consists mainly of foreign keys. My problem with this design is that the stored procedure takes approximately 10ms or so to return, which is too long when you're trying to work through some 10 million records. My suspicion is that the performance issue is to do with the way in which I'm doing the inserts, i.e. INSERT INTO TableA (first_value) VALUES (argument_from_sp) ON DUPLICATE KEY UPDATE id=LAST_INSERT_ID(id); SET @TableAId = LAST_INSERT_ID(); The "ON DUPLICATE KEY UPDATE" is a bit of a hack, due to the fact that on a duplicate key I don't want to update anything but rather just return the id value of the row. If you miss this step though, the LAST_INSERT_ID() function returns the wrong value when you're trying to run the "SET ..." statement. Does anyone know of a better way to do this in MySQL? Thank you
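    A commonly suggested variant for this insert-or-fetch-id step, sketched here on the assumption that first_value has a UNIQUE index, is to look the value up first and only insert on a miss, so the already-present case never touches the insert path:

        -- Sketch only (inside the stored procedure; assumes a UNIQUE index on first_value):
        SET @TableAId = (SELECT id FROM TableA WHERE first_value = argument_from_sp);
        IF @TableAId IS NULL THEN
          INSERT INTO TableA (first_value) VALUES (argument_from_sp);
          SET @TableAId = LAST_INSERT_ID();
        END IF;

    Whether this beats the ON DUPLICATE KEY form depends on how often the value already exists (and, under concurrent writers, the insert can still race), so it is worth timing both against a representative sample before committing to either.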

    Read the article

  • Synchronizing Access to a member of the ASP.NET session

    - by Sam
    I'm building a Javascript application and each user has an individual UserSession. The application makes a bunch of Ajax calls, and each Ajax call needs access to a single UserSession object for the user. Data in the UserSession object is unique to each user. Originally, during each Ajax call I would create a new UserSession object and its data members were stored in the ASP.NET Session. However, I found that the UserSession object was being instantiated a lot. To minimize the construction of the UserSession object, I wrapped it in a Singleton pattern and synchronized access to it. I believe that the synchronization is happening application wide, however I only need it to happen per user. I saw a post here that says the ASP.NET cache is synchronized; however, in the time between creating the object and inserting it into the cache, another thread could start constructing another object and insert it into the cache. Here is the way I'm currently synchronizing access to the object. Is there a better way than using "lock"... should we be locking on the HttpContext.Session object? private static object SessionLock = new object(); public static WebSession GetSession { get { lock (SessionLock) { try { var context = HttpContext.Current; WebSession result = null; if (context.Session["MySession"] == null) { result = new WebSession(context); context.Session["MySession"] = result; } else { result = (WebSession)context.Session["MySession"]; } return result; } catch (Exception ex) { ex.Handle(); return null; } } } }
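    If the goal is just to avoid re-creating WebSession on every call, one option (a sketch only, assuming the default in-process, read-write session state, in which ASP.NET already serializes concurrent requests that share a session id) is to keep the application-wide lock but only take it when the session slot is still empty:

        private static readonly object InitLock = new object();

        public static WebSession GetSession
        {
            get
            {
                var context = HttpContext.Current;
                var result = context.Session["MySession"] as WebSession;
                if (result == null)
                {
                    // Only the first request for a given session pays for the lock.
                    lock (InitLock)
                    {
                        result = context.Session["MySession"] as WebSession;
                        if (result == null)
                        {
                            result = new WebSession(context);
                            context.Session["MySession"] = result;
                        }
                    }
                }
                return result;
            }
        }

    After the first request, reads hit the session directly and the lock is never contended, which keeps the synchronization effectively per user rather than application wide.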

    Read the article

  • User control always crashes Visual Studio

    - by NickAldwin
    I'm trying to open a user control in one of our projects. It was created, I believe, in VS 2003, and the project has been converted to VS2008. I can view the code fine, but when I try to load the designer view, VS stops responding and I have to close it with the task manager. I have tried leaving it running for several minutes, but it does not do anything. I ran "devenv /log" but didn't see anything unusual in the log. I can't find a specific error message anywhere. Any idea what the problem might be? Is there a lightweight editing mode I might be able to use or something? The reason I need to have a look at the visual representation of this control is to decide where to insert some new components. I've tried googling it and searching SO, but either I don't know what to search for or there is nothing out there about this. Any help is appreciated. (The strangest thing is that the user control seems to load fine in another project which references it, but VS crashes as soon as I even so much as click on it in that project.)

    Read the article

  • jQuery click bindings are not working correctly when binding multiple copies

    - by KallDrexx
    I seem to have an issue when creating copies of a template and tying the .click() method to them properly. Take the following javascript for example: var list; // Loop through all of the objects var topics = data.objects; for (x = 0; x < objects.length; x++) { // Clone the object list item template var item = $("#object_item_list_template").clone(); // Setup the click action and inner text for the link tag in the template var objectVal = objects[x].Value; item.find('a').click(function () { ShowObject(objectVal.valueOf(), 'T'); }).html(objects[x].Text); // add the html to the list if (list == undefined) list = item; else list.append(item.contents()); } // Prepend the topics to the topic list $("#object_list").empty().append(list.contents()); The problem I am seeing with this is that no matter which item the user clicks on in the #object_list, ShowObject() is called with the last value of objectVal. So for example, if the 3rd item's <a> is clicked, ShowObject(5,'T'); is called even though objects[2].Value is successfully being seen as 2. How can I get this to work? The main purpose of this code is to take a variable number of items gotten from a JSON AJAX request, make copies of the item template, and insert those copies into the correct spot on the html page. I decided to do it this way so that I can keep all my HTML in one spot for when I need to change the layout or design of the page, and not have to hunt for the html code in the javascript.
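    The behaviour described is the usual shared-closure-variable effect: every handler closes over the same objectVal, which holds the last value of the loop by the time any click happens. A minimal sketch of one fix, reusing the names from the snippet above, is to give each binding its own scope:

        // Capture this iteration's value and text in their own scope, so each
        // cloned link keeps its own copy instead of sharing the loop variable.
        (function (val, text) {
            item.find('a').click(function () {
                ShowObject(val.valueOf(), 'T');
            }).html(text);
        })(objects[x].Value, objects[x].Text);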

    Read the article

  • Meaning of Execute_priv on mysql.db table

    - by Ben Reisner
    I created user 'restriceduser' on my mysql server that is 'locked down'. The mysql.user table has an N for all privileges for that account. The mysql.db table has Y for only Select, Insert, Update, Delete, Create, Drop; all other privileges are N for that account. I tried to create a stored procedure and then grant him access to run only that procedure, no others, but it does not work. The user receives: Error: execute command denied to user 'restricteduser'@'%' for routine 'mydb.functionname' The stored procedure: CREATE DEFINER = 'restriceduser'@'%' FUNCTION `functionname`(sIn MEDIUMTEXT, sformat MEDIUMTEXT) RETURNS int(11) NOT DETERMINISTIC CONTAINS SQL SQL SECURITY DEFINER COMMENT '' BEGIN .... END; The grant statement I tried: GRANT EXECUTE ON PROCEDURE mydb.functionname TO 'restricteduser'@'%'; I was able to work around by modifying his mysql.db entry with update mysql.db set execute_priv='Y' where user='restricteduser' This seems to be more than I want, because it opens up permissions for him to run any stored procedure in that database, while I only wanted him to have permissions to run the designated function. Does anyone see where my issue may lie?
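    One detail worth checking in the excerpt above (offered as a guess, since only the poster can verify it): the routine is created as a FUNCTION, but the grant targets a PROCEDURE, and MySQL keeps the two separate for EXECUTE. A sketch of the function-level grant with the same names:

        -- Grant EXECUTE on the function itself rather than on a procedure of the same name.
        GRANT EXECUTE ON FUNCTION mydb.functionname TO 'restricteduser'@'%';

    The spelling difference between the DEFINER ('restriceduser') and the grantee ('restricteduser') may also be worth a second look.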

    Read the article

  • Python: creating a dictionary and merging its values into another file

    - by satsurae
    Hi all, I have two tab delimited .csv file. From one.csv I have created a dictionary which looks like: 'EB2430': ' "\t"idnD "\t"yjgV "\t"b4267 "\n', 'EB3128': ' "\t"yagE "\t\t"b0268 "\n', 'EB3945': ' "\t"maeB "\t"ypfF "\t"b2463 "\n', 'EB3944': ' "\t"eutS "\t"ypfE "\t"b2462 "\n', I would like to insert the value of the dictionary into the second.csv file which looks like: "EB2430" 36.81 364 222 4 72 430 101 461 1.00E-063 237 "EB3128" 26.04 169 108 6 42 206 17 172 6.00E-006 45.8 "EB3945" 20.6 233 162 6 106 333 33 247 6.00E-005 42.4 "EB3944" 19.07 367 284 6 1 355 1 366 2.00E-023 103 With a resultant output tab delimited: 'EB2430' idnD yjgV b4267 36.81 364 222 4 72 430 101 461 1.00E-063 237 'EB3128' yagE b0268 26.04 169 108 6 42 206 17 172 6.00E-006 45.8 'EB3945' maeB ypfF b2463 20.6 233 162 6 106 333 33 247 6.00E-005 42.4 'EB3944' eutS ypfE b2462 19.07 367 284 6 1 355 1 366 2.00E-023 103 Here is my code for creating the dictionary: f = open ("one.csv", "r") g = open ("second.csv", "r") eb = [] desc = [] di = {} for line in f: for row in f: eb.append(row[1:7]) desc.append(row[7:]) di = dict(zip(eb,desc)) Sorry for it being so long-winded!! I've not been programming for long. Cheers! Sat
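    A sketch of one way to do the whole join with the csv module (Python 3; it assumes both files are tab-delimited, that the identifier is the first column of each, and uses the file names from the question):

        import csv

        # Build the lookup from one.csv: 'EB2430' -> ['idnD', 'yjgV', 'b4267']
        lookup = {}
        with open("one.csv") as f:
            for row in csv.reader(f, delimiter="\t"):
                lookup[row[0].strip('"')] = [field for field in row[1:] if field]

        # Walk second.csv and splice the looked-up names in right after the key.
        with open("second.csv") as g, open("merged.csv", "w", newline="") as out:
            writer = csv.writer(out, delimiter="\t")
            for row in csv.reader(g, delimiter="\t"):
                key = row[0].strip('"')
                writer.writerow([key] + lookup.get(key, []) + row[1:])

    Letting csv.reader do the splitting also avoids the fixed slice positions (row[1:7], row[7:]) that make the original loop fragile.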

    Read the article

  • Python unittest (using SQLAlchemy) does not write/update database?

    - by Jerry
    Hi, I am puzzled at why my Python unittest runs perfectly fine without actually updating the database. I can even see the SQL statements from SQLAlchemy and step through the newly created user object's email -- ...INFO sqlalchemy.engine.base.Engine.0x...954c INSERT INTO users (user_id, user_name, email, ...) VALUES (%(user_id)s, %(user_name)s, %(email)s, ...) ...INFO sqlalchemy.engine.base.Engine.0x...954c {'user_id': u'4cfdafe3f46544e1b4ad0c7fccdbe24a', 'email': u'[email protected]', ...} > .../tests/unit_tests/test_signup.py(127)test_signup_success() -> user = user_q.filter_by(user_name='test').first() (Pdb) n ...INFO sqlalchemy.engine.base.Engine.0x...954c SELECT users.user_id AS users_user_id, ... FROM users WHERE users.user_name = %(user_name_1)s LIMIT 1 OFFSET 0 ...INFO sqlalchemy.engine.base.Engine.0x...954c {'user_name_1': 'test'} > .../tests/unit_tests/test_signup.py(128)test_signup_success() -> self.assertTrue(isinstance(user, model.User)) (Pdb) user <pweb.models.User object at 0x9c95b0c> (Pdb) user.email u'[email protected]' Yet at the same time when I login to the test database, I do not see the new record there. Is it some feature from Python/unittest/SQLAlchemy/Pyramid/PostgreSQL that I'm totally unaware of? Thanks. Jerry
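    What the log shows is consistent with an uncommitted transaction: the INSERT and the SELECT both run inside the test's own session, so the new row is visible there but not from a separate connection until something commits. A minimal sketch of the distinction, with session standing in for whatever Session the test fixture actually uses:

        # Sketch (names assumed): flush() emits the SQL inside the open transaction,
        # commit() is what makes the row visible to other connections such as psql.
        session.add(user)
        session.flush()    # INSERT is sent; only this session/transaction sees the row
        session.commit()   # now the row is durable and visible outside the test

    Test harnesses frequently roll the transaction back in tearDown on purpose, so "works in the test, absent from the database afterwards" is often the intended behaviour rather than a bug.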

    Read the article

  • Google Spreadsheet API problem: memory exceeded

    - by Robbert
    Hi guys, Don't know if anyone has experience with the Google Spreadsheets API or the Zend_GData classes but it's worth a go: When I try to insert a value in a 750 row spreadsheet, it takes ages and then throws an error that my memory limit (which is 128 MB!) was exceeded. I also got this when querying all records of this spreadsheet, but that I can imagine, because it's quite a lot of data. But why does this happen when inserting a row? That's not too complex, is it? Here's the code I used: public function insertIntoSpreadsheet($username, $password, $spreadSheetId, $data = array()) { $service = Zend_Gdata_Spreadsheets::AUTH_SERVICE_NAME; $client = Zend_Gdata_ClientLogin::getHttpClient($username, $password, $service); $client->setConfig(array( 'timeout' => 240 )); $service = new Zend_Gdata_Spreadsheets($client); if (count($data) == 0) { die("No valid data"); } try { $newEntry = $service->insertRow($data, $spreadSheetId); return true; } catch (Exception $e) { return false; } }

    Read the article

  • Spring MVC + Hibernate encoding problem

    - by Bar
    I work on a Spring MVC + Hibernate application, using MySQL (ver. 5.0.51a) with the InnoDB engine. The problem appears when I am sending a form with Cyrillic characters. As a result, the database contains senseless chars in an unknown encoding. All the JSP pages and the database (+ tables and fields) were created using UTF-8. The Hibernate config also contains a property which sets the encoding to UTF-8. I had solved this by creating a filter which encodes the request content as UTF-8. Example code: … encoding = "UTF-8"; request.setCharacterEncoding(encoding); chain.doFilter(request, response); … But it visibly slows down the app. The interesting thing is that executing the insert query directly from the app (i.e. running from Eclipse as a Java Application) works perfectly. Any suggestions are welcome. TIA, Michael.
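    For what it's worth, Spring ships a ready-made filter for exactly this job, so the hand-rolled one isn't strictly needed; a web.xml sketch (assuming a standard servlet deployment descriptor, mapped in front of the dispatcher):

        <!-- Spring's built-in request-encoding filter; forceEncoding also applies it to the response. -->
        <filter>
            <filter-name>encodingFilter</filter-name>
            <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
            <init-param>
                <param-name>encoding</param-name>
                <param-value>UTF-8</param-value>
            </init-param>
            <init-param>
                <param-name>forceEncoding</param-name>
                <param-value>true</param-value>
            </init-param>
        </filter>
        <filter-mapping>
            <filter-name>encodingFilter</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>

    It is also worth confirming that the JDBC URL asks for UTF-8 (useUnicode=true and characterEncoding=UTF-8), since garbage on insert from the web tier while direct inserts work can equally come from the connection rather than the request encoding.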

    Read the article

  • HTML5 Database Transactions

    - by jiewmeng
    I am wondering about the example from W3C Offline Web Apps. The example function renderNotes() { db.transaction(function(tx) { tx.executeSql('CREATE TABLE IF NOT EXISTS Notes(title TEXT, body TEXT)', []); tx.executeSql('SELECT * FROM Notes', [], function(tx, rs) { for(var i = 0; i < rs.rows.length; i++) { renderNote(rs.rows[i]); } }); }); } has the CREATE TABLE before the 'main' executeSql(). Would it be better if I did something like $(function() { // create table 1st db.transaction(function(tx) { tx.executeSql('CREATE TABLE IF NOT EXISTS Notes(title TEXT, body TEXT)', []); }); // when I execute, say, a select/modify of data, I just do the actual action db.transaction(function(tx) { tx.executeSql('SELECT * FROM Notes', [], function(tx, rs) { ... } }); db.transaction(function(tx) { tx.executeSql('INSERT ...', [], function(tx, rs) { ... } }); }) I was thinking I don't need to keep repeating the CREATE TABLE IF NOT EXISTS, right?
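    That split looks reasonable; as far as the spec's transaction queue goes, transactions on the same database run one after another, so a one-off init transaction at start-up can create the table before any later work touches it. A sketch, with the explicit ready callback being the part assumed here:

        // Sketch: do the CREATE once at start-up; transaction() also takes an error
        // callback and a success callback, so later work can wait for it explicitly.
        function initDb(onReady) {
            db.transaction(function (tx) {
                tx.executeSql('CREATE TABLE IF NOT EXISTS Notes(title TEXT, body TEXT)', []);
            }, function (err) { console.log(err.message); }, onReady);
        }

        initDb(function () {
            db.transaction(function (tx) {
                tx.executeSql('SELECT * FROM Notes', [], function (tx, rs) {
                    for (var i = 0; i < rs.rows.length; i++) { renderNote(rs.rows.item(i)); }
                });
            });
        });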

    Read the article

  • ibatis throwing NullPointerException

    - by Prashant P
    i am trying to test ibatis with DB. I get NullPointerException. Below are the class and ibatis bean config, <select id="getByWorkplaceId" parameterClass="java.lang.Integer" resultMap="result"> select * from WorkDetails where workplaceCode=#workplaceCode# </select> <select id="getWorkplace" resultClass="com.ibatis.text.WorkDetails"> select * from WorkDetails </select> POJO public class WorkplaceDetail implements Serializable { private static final long serialVersionUID = -6760386803958725272L; private int code; private String plant; private String compRegNum; private String numOfEmps; private String typeIndst; private String typeProd; private String note1; private String note2; private String note3; private String note4; private String note5; } DAOimplementation public class WorkplaceDetailImpl implements WorkplaceDetailsDAO { private SqlMapClient sqlMapClient; public void setSqlMapClient(SqlMapClient sqlMapClient) { this.sqlMapClient = sqlMapClient; } @Override public WorkplaceDetail getWorkplaceDetail(int code) { WorkplaceDetail workplaceDetail=new WorkplaceDetail(); try{ **workplaceDetail= (WorkplaceDetail) this.sqlMapClient.queryForObject("workplaceDetail.getByWorkplaceId", code);** }catch (SQLException sqlex){ sqlex.printStackTrace(); } return workplaceDetail; } TestCode public class TestDAO { public static void main(String args[]) throws Exception{ WorkplaceDetail wd = new WorkplaceDetail(126, "Hoonkee", "1234", "22", "Service", "Tele", "hsgd","hsgd","hsgd","hsgd","hsgd"); WorkplaceDetailImpl impl= new WorkplaceDetailImpl(); **impl.getWorkplaceDetail(wd.getCode());** impl.saveOrUpdateWorkplaceDetails(wd); System.out.println("dhsd"+impl); } } I want to select and insert. I have marked as ** ** as a point of exception in above code Exception in thread "main" java.lang.NullPointerException at com.ibatis.text.WorkplaceDetailImpl.getWorkplaceDetail(WorkplaceDetailImpl.java:19) at com.ibatis.text.TestDAO.main(TestDAO.java:11)
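    A guess at the NullPointerException, based only on the excerpt: the test creates WorkplaceDetailImpl with new and never calls setSqlMapClient, so this.sqlMapClient is null at the marked line. A sketch of wiring it by hand in the test (the SqlMapConfig.xml path is an assumption):

        import java.io.Reader;
        import com.ibatis.common.resources.Resources;
        import com.ibatis.sqlmap.client.SqlMapClient;
        import com.ibatis.sqlmap.client.SqlMapClientBuilder;

        Reader reader = Resources.getResourceAsReader("SqlMapConfig.xml");
        SqlMapClient client = SqlMapClientBuilder.buildSqlMapClient(reader);

        WorkplaceDetailImpl impl = new WorkplaceDetailImpl();
        impl.setSqlMapClient(client);          // without this the DAO has no client and throws the NPE
        impl.getWorkplaceDetail(126);

    It is also worth checking that "workplaceDetail.getByWorkplaceId" matches the namespace declared in the sqlMap file, since the mapping shown only gives the statement id.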

    Read the article

  • Creating syncable Calendar in ICS

    - by user1390816
    I have a problem with creating a new Calendar in ICS. The Calendar should be syncable with Google Calendar. I tried the following: Uri calendarUri = CalendarContract.Calendars.CONTENT_URI; calendar.put(CalendarContract.Calendars.ACCOUNT_NAME, sync_account); calendar.put(CalendarContract.Calendars.ACCOUNT_TYPE, "com.google"); calendar.put(CalendarContract.Calendars.NAME, name); calendar.put(CalendarContract.Calendars.CALENDAR_DISPLAY_NAME, displayName); calendar.put(CalendarContract.Calendars.CALENDAR_COLOR, 0xFF008080); calendar.put(CalendarContract.Calendars.CALENDAR_ACCESS_LEVEL, CalendarContract.Calendars.CAL_ACCESS_OWNER); calendar.put(CalendarContract.Calendars.OWNER_ACCOUNT, true); calendar.put(CalendarContract.Calendars.VISIBLE, 1); calendar.put(CalendarContract.Calendars.SYNC_EVENTS, 1); calendarUri = calendarUri.buildUpon() .appendQueryParameter(CalendarContract.CALLER_IS_SYNCADAPTER, "true") .appendQueryParameter(CalendarContract.Calendars.ACCOUNT_NAME, sync_account) .appendQueryParameter(CalendarContract.Calendars.ACCOUNT_TYPE, "com.google") // CalendarContract.ACCOUNT_TYPE_LOCAL .build(); Uri result = activity.getContentResolver().insert(calendarUri, calendar); and I always get this error: 09-17 17:11:30.278: E/AndroidRuntime(13243): FATAL EXCEPTION: CalendarSyncAdapterAccountMonitor 09-17 17:11:30.278: E/AndroidRuntime(13243): java.lang.IllegalArgumentException: the name must not be empty: null 09-17 17:11:30.278: E/AndroidRuntime(13243): at android.accounts.Account.<init>(Account.java:48) 09-17 17:11:30.278: E/AndroidRuntime(13243): at com.google.android.syncadapters.calendar.CalendarSyncAdapter.onAccountsUpdated(CalendarSyncAdapter.java:1129) 09-17 17:11:30.278: E/AndroidRuntime(13243): at android.accounts.AccountManager$11.run(AccountManager.java:1279) 09-17 17:11:30.278: E/AndroidRuntime(13243): at android.os.Handler.handleCallback(Handler.java:605) 09-17 17:11:30.278: E/AndroidRuntime(13243): at android.os.Handler.dispatchMessage(Handler.java:92) 09-17 17:11:30.278: E/AndroidRuntime(13243): at android.os.Looper.loop(Looper.java:137) 09-17 17:11:30.278: E/AndroidRuntime(13243): at android.os.HandlerThread.run(HandlerThread.java:60) 09-17 17:11:30.293: E/android.os.Debug(1989): !@Dumpstate > dumpstate -k -t -n -z -d -o /data/log/dumpstate_app_error What can I do so that the account the CalendarSyncAdapterAccountMonitor sees is not empty? Thanks in advance.
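    One thing that stands out in the snippet, offered as a guess rather than a confirmed fix: OWNER_ACCOUNT is a text column that expects the owning account's name, but it is being given a boolean, and ACCOUNT_NAME/ACCOUNT_TYPE have to match a Google account that actually exists on the device, otherwise the sync adapter ends up seeing an account with a null name. A sketch of the relevant puts (the values are assumptions):

        ContentValues calendar = new ContentValues();
        calendar.put(CalendarContract.Calendars.ACCOUNT_NAME, sync_account);  // an existing device account, e.g. "someone@gmail.com"
        calendar.put(CalendarContract.Calendars.ACCOUNT_TYPE, "com.google");
        calendar.put(CalendarContract.Calendars.OWNER_ACCOUNT, sync_account); // account name string, not a boolean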

    Read the article

  • How Do I Escape Apostrophes in Field Values in SQL Server?

    - by Mikecancook
    I asked a question a couple of days ago about creating INSERTs by running a SELECT to move data to another server. That worked great until I ran into a table that has full-on HTML and apostrophes in it. What's the best way to deal with this? Luckily there aren't too many rows so it is feasible as a last resort to 'copy and paste'. But, eventually I will need to do this and the table by that time will probably be way too big to copy and paste these HTML fields. This is what I have now: select 'Insert into userwidget ([Type],[UserName],[Title],[Description],[Data],[HtmlOutput],[DisplayOrder],[RealTime],[SubDisplayOrder]) VALUES (' + ISNULL('N'''+Convert(varchar(8000),Type)+'''','NULL') + ',' + ISNULL('N'''+Convert(varchar(8000),Username)+'''','NULL') + ',' + ISNULL('N'''+Convert(varchar(8000),Title)+'''','NULL') + ',' + ISNULL('N'''+Convert(varchar(8000),Description)+'''','NULL') + ',' + ISNULL('N'''+Convert(varchar(8000),Data)+'''','NULL') + ',' + ISNULL('N'''+Convert(varchar(8000),HTMLOutput)+'''','NULL') + ',' + ISNULL('N'''+Convert(varchar(8000),DisplayOrder)+'''','NULL') + ',' + ISNULL('N'''+Convert(varchar(8000),RealTime)+'''','NULL') + ',' + ISNULL('N'''+Convert(varchar(8000),SubDisplayOrder)+'''','NULL') + ')' from userwidget This works fine except for those pesky apostrophes in the HTMLOutput field. Can I escape them by having the query double up on the apostrophes or is there a way of encoding the field result so it won't matter?
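    Doubling the apostrophes is exactly the usual fix; a sketch of wrapping one of the columns (the same REPLACE would go around each text column in the SELECT above):

        -- Double every single quote inside the value so the generated INSERT stays valid T-SQL.
        ISNULL('N''' + REPLACE(CONVERT(varchar(8000), HTMLOutput), '''', '''''') + '''', 'NULL')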

    Read the article

  • Strange XCode debugger behavior with UITableView datasource

    - by Tarfa
    Hey guys. I've got a perplexing issue. In my subclassed UITableViewController my datasource methods lose their tableview reference depending on lines of code I put inside the method. For example, in this code block: - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { // Return the number of sections. return 3; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { // Return the number of rows in the section. return 5; } // Customize the appearance of table view cells. - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { id i = tableView; static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease]; } // Configure the cell... return cell; } the "id i = tableView;" causes the tableview to become nil (0x0) -- and it causes it to be nil before I ever start stepping into the method. If I insert an assignment statement above the "id i = tableview;" statement: CGFloat x = 5.0; id i = tableView; then tableview retains its pointer (i.e. is not nil) if I place the breakpoint after the "id i = tableView;" line. In other words, the breakpoint must be set after the "id i = tableView"; assignment in order for tableView to retain its pointer. If the breakpoint is set before the assignment is made and I just hang at that breakpoint for a bit then after a couple of seconds the console logs this error message: Assertion failed: (cls), function getName, file /SourceCache/objc4_Sim/objc4-427.5/runtime/objc-runtime-new.mm, line 3990. Although the code works when I don't step through the method, I need my debugger to work! It makes programming kind of challenging when your debugging tools become your enemy. Anyone know what the cause and solution are? Thanks.

    Read the article

  • Python subprocess: 64 bit windows server PIPE doesn't exist :(

    - by Spaceman1861
    I have a GUI that launches selected python scripts and runs it in cmd next to the gui window. I am able to get my launcher to work on my (windows xp 32 bit) laptop but when I upload it to the server(64bit windows iss7) I am running into some issues. The script runs, to my knowledge but spits back no information into the cmd window. My script is a bit of a Frankenstein that I have hacked and slashed together to get it to work I am fairly certain that this is a very bad example of the subprocess module. Just wondering if i could get a hand :). My question is how do i have to alter my code to work on a 64bit windows server. :) from Tkinter import * import pickle,subprocess,errno,time,sys,os PIPE = subprocess.PIPE if subprocess.mswindows: from win32file import ReadFile, WriteFile from win32pipe import PeekNamedPipe import msvcrt else: import select import fcntl def recv_some(p, t=.1, e=1, tr=5, stderr=0): if tr < 1: tr = 1 x = time.time()+t y = [] r = '' pr = p.recv if stderr: pr = p.recv_err while time.time() < x or r: r = pr() if r is None: if e: raise Exception(message) else: break elif r: y.append(r) else: time.sleep(max((x-time.time())/tr, 0)) return ''.join(y) def send_all(p, data): while len(data): sent = p.send(data) if sent is None: raise Exception(message) data = buffer(data, sent) The code above isn't mine def Run(): print filebox.get(0) location = filebox.get(0) location = location.__str__().replace(listbox.get(ANCHOR).__str__(),"") theTime = time.asctime(time.localtime(time.time())) lastbox.delete(0, END) lastbox.insert(END,theTime) for line in CookieCont: if listbox.get(ANCHOR) in line and len(line) > 4: line[4] = theTime else: "Fill In the rip Details to record the time" if __name__ == '__main__': if sys.platform == 'win32' or sys.platform == 'win64': shell, commands, tail = ('cmd', ('cd "'+location+'"',listbox.get(ANCHOR).__str__()), '\r\n') else: return "Please use contact admin" a = Popen(shell, stdin=PIPE, stdout=PIPE) print recv_some(a) for cmd in commands: send_all(a, cmd + tail) print recv_some(a) send_all(a, 'exit' + tail) print recv_some(a, e=0) The Code above is mine :)
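    If losing the incremental output is acceptable, a much smaller sketch that avoids the win32 PeekNamedPipe/ReadFile path entirely (and with it the 32- vs 64-bit differences) is to hand everything to cmd in one go and read it back with communicate():

        import subprocess

        def run_in_cmd(location, command):
            # Feed the commands to one cmd.exe process and collect all output at the end.
            proc = subprocess.Popen('cmd', stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                                    stderr=subprocess.STDOUT, universal_newlines=True)
            out, _ = proc.communicate('cd "%s"\r\n%s\r\nexit\r\n' % (location, command))
            return out

    The trade-off is that nothing is shown until the script finishes, so it only fits if the GUI can display the output after the fact rather than streaming it.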

    Read the article

  • How to change an onclick event with jQuery?

    - by user550758
    I have created a js file in which I am creating a dynamic table and dynamically changing the click event for the calendar, but on clicking the calendar image for the dynamically generated table, the calendar pops up on the previous calendar image. Please help me with the code. /***------------------------------------------------------------ * *Developer: Vipin Sharma * *Creation Date: 20/12/2010 (dd/mm/yyyy) * *ModifiedDate ModifiedBy Comments (As and when) * *-------------------------------------------------------------*/ var jq = jQuery.noConflict(); var ia = 1; jq(document).ready(function(){ jq("#subtaskid1").click(function() { if(ia<=10){ var create_table = jq("#orgtable").clone(); create_table.find("input").each(function() { jq(this).attr({ 'id': function(_, id) { return ia + id }, 'name': function(_, name) { return ia + name }, 'value': '' }); }).end(); create_table.find("select").each(function(){ jq(this).attr({ 'name': function(_,name){ return ia + name } }); }).end(); create_table.find("textarea").each(function(){ jq(this).attr({ 'name': function(_,name){ return ia + name } }); }).end(); create_table.find("#f_trigger_c").each(function(){ var oclk = " displayCalendar(document.prjectFrm['"+ ia +"dtSubDate'],'yyyy-mm-dd', this)"; //ERROR IS HERE var newclick = new Function(oclk); jq(this).click(newclick); }).end(); create_table.appendTo("#subtbl"); jq('#maxval').val(ia); ia++; }else{ var ai = ia-1; alert('Only ' + ai + ' SubTask can be insert'); } }); });
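    A sketch of one way to bind the handler without building its source as a string (the assumptions here: the image's original behaviour comes from an inline onclick that is copied along with the clone, and giving each cloned image a unique id is wanted so the #f_trigger_c selector stops matching the original):

        create_table.find("#f_trigger_c").each(function () {
            var fieldName = ia + "dtSubDate";        // the name given to this clone's date input above
            jq(this).attr("id", ia + "f_trigger_c")  // keep the cloned image's id unique per row
                    .removeAttr("onclick")           // drop the inline handler copied from the original
                    .click(function () {
                        displayCalendar(document.prjectFrm[fieldName], "yyyy-mm-dd", this);
                    });
        });

    Because the handler closes over fieldName, each clone's calendar targets its own date field instead of the first one.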

    Read the article

  • Fastest way to modify a decimal-keyed table in MySQL?

    - by javanix
    I am dealing with a MySQL table here that is keyed in a somewhat unfortunate way. Instead of using an auto increment table as a key, it uses a column of decimals to preserve order (presumably so its not too difficult to insert new rows while preserving a primary key and order). Before I go through and redo this table to something more sane, I need to figure out how to rekey it without breaking everything. What I would like to do is something that takes a list of doubles (the current keys) and outputs a list of integers (which can be cast down to doubles for rekeying). For example, input {1.00, 2.00, 2.50, 2.60, 3.00} would give output {1, 2, 3, 4, 5). Since this is a database, I also need to be able to update the rows nicely: UPDATE table SET `key`='3.00' WHERE `key`='2.50'; Can anyone think of a speedy algorithm to do this? My current thought is to read all of the doubles into a vector, take the size of the vector, and output a new vector with values from 1 => doubleVector.size. This seems pretty slow, since you wouldn't want to read every value into the vector if, for instance, only the last n/100 elements needed to be modified. I think there is probably something I can do in place, since only values after the first non-integer double need to be modified, but I can't for the life of me figure anything out that would let me update in place as well. For instance, setting 2.60 to 3.00 the first time you see 2.50 in the original key list would result in an error, since the key value 3.00 is already used for the table.
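    A sketch of doing the renumbering inside MySQL with a user variable; the offset pass exists precisely because of the collision mentioned at the end (2.50 becoming 3 while 3.00 still exists), and it assumes the offset is larger than any current key and that nothing else stores these key values:

        -- Pass 1: move every key out of the target range so pass 2 cannot collide.
        UPDATE `table` SET `key` = `key` + 1000000;

        -- Pass 2: hand out consecutive integers in the original key order.
        SET @n := 0;
        UPDATE `table` SET `key` = (@n := @n + 1) ORDER BY `key`;

    This avoids pulling the whole column into application memory; only the rows actually change, and order is preserved by the ORDER BY in the second pass.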

    Read the article

  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an asp.net (mvc) web site. As part of its functionality I will have to support some long-running operations, for example: Initiated from a user: a user can upload an (xml) file to the server. On the server I need to extract the file, do some manipulation (insert into the db) etc... This can take from one minute to ten minutes (or even more - depends on file size). Of course I don't want to block the request while the import is running, but I want to redirect the user to a progress page where they will have a chance to watch the status, errors or even cancel the import. This operation will not be frequently used, but it may happen that two users at the same time will try to import the data. It would be nice to run the imports in parallel. At first I was thinking of creating a new thread in IIS (from the controller action) and running the import there. But I am not sure if this is a good idea (creating worker threads on a web server). Should I use Windows services or some other approach? Initiated from the system: - I will have to periodically update the Lucene index with the new data. - I will have to send mass emails (in the future). Should I implement this as a job in the site and run the job via Quartz.net, or should I also create a Windows service or something? What are the best practices when it comes to running site "jobs"? Thanks!

    Read the article

  • Machine learning algorithm for data classification

    - by twk
    Hi all, I'm looking for some guidance about which techniques/algorithms I should research to solve the following problem. I've currently got an algorithm that clusters similar-sounding mp3s using acoustic fingerprinting. In each cluster, I have all the different metadata (song/artist/album) for each file. For that cluster, I'd like to pick the "best" song/artist/album metadata that matches an existing row in my database, or if there is no best match, decide to insert a new row. For a cluster, there is generally some correct metadata, but individual files have many types of problems: artists/songs are completely misnamed, or just slightly misspelled; the artist/song/album is missing, but the rest of the information is there; the song is actually a live recording, but only some of the files in the cluster are labeled as such; there may be very little metadata, in some cases just the file name, which might be artist - song.mp3, or artist - album - song.mp3, or another variation. A simple voting algorithm works fairly well, but I'd like to have something I can train on a large set of data that might pick up more nuances than what I've got right now. Any links to papers or similar projects would be greatly appreciated. Thanks!

    Read the article

  • Time with and without OpenMP

    - by was
    I have a question. I tried to improve a well-known algorithm in C, the Fox algorithm for matrix multiplication; here is the original version without OpenMP: (http://web.mst.edu/~ercal/387/MPI/ppmpi_c/chap07/fox.c). The initial program had only MPI and I tried to insert OpenMP into the matrix multiplication method, in order to improve the computation time. (This program runs on a cluster whose machines have 2 cores, so I created 2 threads.) The problem is that there is no difference in time with and without OpenMP; sometimes the OpenMP time is the same as or greater than the time without it. I tried to multiply two 600x600 matrices. void Local_matrix_multiply( LOCAL_MATRIX_T* local_A /* in */, LOCAL_MATRIX_T* local_B /* in */, LOCAL_MATRIX_T* local_C /* out */) { int i, j, k; chunk = CHUNKSIZE; // 100 #pragma omp parallel shared(local_A, local_B, local_C, chunk, nthreads) private(i,j,k,tid) num_threads(2) { /* tid = omp_get_thread_num(); if(tid == 0){ nthreads = omp_get_num_threads(); printf("O Pollaplasiamos pinakwn ksekina me %d threads\n", nthreads); } printf("Thread %d use the matrix: \n", tid); */ #pragma omp for schedule(static, chunk) for (i = 0; i < Order(local_A); i++) for (j = 0; j < Order(local_A); j++) for (k = 0; k < Order(local_B); k++) Entry(local_C,i,j) = Entry(local_C,i,j) + Entry(local_A,i,k)*Entry(local_B,k,j); } //end pragma omp parallel } /* Local_matrix_multiply */
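    One thing that may explain the flat timings, offered as a guess: with schedule(static, chunk) and chunk = 100, a process whose local block has 100 or fewer rows hands every iteration of the i loop to a single thread, so the second core sits idle (and with slightly larger blocks the split is badly unbalanced). A sketch that lets the runtime divide the rows evenly instead (collapse needs an OpenMP 3.0 compiler; the rest reuses the names from the code above):

        /* Sketch: the default static schedule splits iterations evenly between the 2 threads;
           collapse(2) parallelizes over (i, j) pairs so even small local blocks keep both cores busy. */
        #pragma omp parallel for collapse(2) schedule(static) private(k) num_threads(2)
        for (i = 0; i < Order(local_A); i++)
            for (j = 0; j < Order(local_A); j++)
                for (k = 0; k < Order(local_B); k++)
                    Entry(local_C,i,j) = Entry(local_C,i,j) + Entry(local_A,i,k)*Entry(local_B,k,j);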

    Read the article

  • database setup for web application

    - by vbNewbie
    I have an application that requires a database and I have already setup tables but not sure if they match the requirements of the app. The app is a crawler which fetches web urls, crawls and stores appropriate urls and posts and all this is based on client requests which are stored as projects. So for each url stored there is one post and for client there are many projects and for each project there are many types of requests. So we get a client with a request and assign them a project name and then use the request to search for content and store the url and post. A request could already exist and should not be duplicated but should be associated with the right client and project and post etc. Here is my schema now: url table: urlId PK queryId FK url post table: postId PK urlId FK post date request table: queryId PK request client table: clientId PK client Name projectId FK project table: projectID PK queryID FK project Does this look right? or does anyone have suggestions. Of course my stored procedures and insert statements will have to be in depth.

    Read the article

  • handle large Parcelable ArrayList in Android

    - by Gal Ben-Haim
    I'm developing an Android app that is a client to a JSON webservice API. I have classes of resource objects (some are nested) and I pass results from an IntentService that access the webserive using the Parcelable interface for all the resource classes. the webservice returns arrays or results that can be potentially large (because of the nesting, for example, a post object also contains comments array, each comment also contains a user object). currently I'm either inserting the results into a SQlite database or displaying them in a ListView. (my relevant methods are accepting ArrayList<resourceClass> as arguments). (some data need to be persistent stored and some should not). since I don't know what size of lists I can handle this way without reaching the memory limits, is this a good practice ? is it a better idea to save the parsed JSON to a local file immediately and pass the file path to the ResultReceiver, then either insert to database from that file or display the data ? is there a better way to handle this ? btw - I'm parsing the JSON as a stream with Gson's Reader so there shouldn't be memory issues at that stage.

    Read the article
