Search Results

Search found 25547 results on 1022 pages for 'table locking'.


  • Creating a function to grab data from an Oracle database (array by ID)

    - by Nick
    I'm trying to create a function that simply lets me pass an SQL statement into it and returns an array keyed by a unique ID column I pass it:

        function oracleGetData($query, $id = "id") {
            global $conn;
            $sql = OCI_Parse($conn, $query);
            OCI_Execute($sql);
            OCI_Fetch_All($sql, $results, null, null, OCI_FETCHSTATEMENT_BY_ROW);
            return $results;
        }

    For example, I'd like this query:

        $array = oracleGetData('select * from table');

    to return something like:

        [1] => Array ( [Title] => Title 1 [Description] => Description 1 )
        [2] => Array ( [Title] => Title 2 [Description] => Description 2 )
        [3] => Array ( [Title] => Title 3 [Description] => Description 3 )

    rather than what it returns at the moment:

        [0] => Array ( [ID] => 3 [TITLE] => Title 3 [DESCRIPTION] => Description 3 )
        [1] => Array ( [ID] => 1 [TITLE] => Title 1 [DESCRIPTION] => Description 1 )
        [2] => Array ( [ID] => 2 [TITLE] => Title 2 [DESCRIPTION] => Description 2 )

    I'd really appreciate any help with this, as the function would save me a lot of time. Thank you.
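    For illustration only (Python rather than the PHP used in the post; the column names and the idea of re-indexing the fetched rows are assumptions, not the original poster's code), the desired keying amounts to building a dictionary keyed by the ID column after the rows come back from the driver:

        # Minimal sketch: re-key a list of row dictionaries by an "ID" column.
        # The sample rows stand in for whatever the database driver returns.
        rows = [
            {"ID": 3, "TITLE": "Title 3", "DESCRIPTION": "Description 3"},
            {"ID": 1, "TITLE": "Title 1", "DESCRIPTION": "Description 1"},
            {"ID": 2, "TITLE": "Title 2", "DESCRIPTION": "Description 2"},
        ]

        def index_by(rows, key="ID"):
            """Map each row's key column to the remaining columns of that row."""
            return {row[key]: {k: v for k, v in row.items() if k != key}
                    for row in rows}

        print(index_by(rows))
        # {3: {'TITLE': 'Title 3', ...}, 1: {'TITLE': 'Title 1', ...}, 2: {...}}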

    Read the article

  • Separating and counting CSV entries from a database (Access/ASP Classic)

    - by Katherine Perotta
    Hey, I could really use some help with this one. I have an FAQ with multiple "tags" per entry, and I would like to separate and count them. They are currently stored in the database as follows:

        ID    TITLE            CONTENT            TAGS
        1     sampletitle 1    samplecontent      tag1,tag2,tag3
        2     sampletitle 2    moresamplestuff    tag3,tag4,tag5

    How could I go about counting the number of times each tag is used? In the end, would it be easier to just create a separate table called TAGS, with a single tag corresponding to a single ID in FAQ? The only reason I'd rather not do that is that I already have so much data it would take quite a while. However, if there's no alternative, or if it's easier than doing string parsing like this, I'm willing to do it. The goal is to display each unique tag and the number of times it is used. Would it be better to do the heavy lifting in the database or in ASP? I have gotten as far as getting a list of all tags and displaying them in an array (with each tag separated). So at this point what I need to do is count each value and then remove the duplicates (while preserving the count somewhere). This is in ASP Classic using an Access database. Thanks!
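    For illustration only (Python rather than ASP Classic; the two tag strings are taken from the sample rows above), the counting step itself can be as simple as splitting each TAGS value and feeding the pieces to a counter:

        # Minimal sketch: count how often each comma-separated tag appears.
        from collections import Counter

        tag_fields = ["tag1,tag2,tag3", "tag3,tag4,tag5"]   # one string per FAQ row

        counts = Counter(
            tag.strip()
            for field in tag_fields
            for tag in field.split(",")
            if tag.strip()
        )
        print(counts)   # Counter({'tag3': 2, 'tag1': 1, 'tag2': 1, 'tag4': 1, 'tag5': 1})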

    Read the article

  • NHibernate: Root collection with a root object

    - by Daniel
    Hi, I want to track a list of root objects which are not contained by any other element. I want the following pseudo code to work:

        IList<FavoriteItem> list = session.Linq<FavoriteItem>().ToList();
        list.Add(item1);
        list.Add(item2);
        list.Remove(item3);
        list.Remove(item4);
        var item5 = list.First(i => i.Name == "Foo");
        item5.Name = "Bar";
        session.Save(list);

    This should automatically insert item1 and item2, delete item3 and item4, and update item5 (i.e. I don't want to call session.SaveOrUpdate() for each item separately). Is it possible to define a pseudo entity that is not associated with a table? For example, I want to define a class Favorites, map its two collection properties, and then write code like this:

        var favs = session.Linq<Favorites>();
        favs.FavoriteColors.Add(new FavoriteColor(...));
        favs.FavoriteMovies.Add(new FavoriteMovie(...));
        session.SaveOrUpdate(favs);

    FavoriteColors and FavoriteMovies are the only properties of the Favorites class and are of type IList<FavoriteColor> and IList<FavoriteMovie>. I only want to persist these two collection properties, not the Favorites class itself. Any help is much appreciated.

    Read the article

  • Updating a composite primary key

    - by VBCSharp
    I am struggling with the philosophical discussions about whether or not to use composite primary keys in my SQL Server database. I have always used surrogate keys in the past, and I am challenging myself by leaving my comfort zone to try something different. I have read many discussions but can't come to any kind of conclusion yet. The struggle I am having is when I have to update a record identified by a composite PK. For example, the record in question looks like this: ContactID, RoleID, EffectiveDate, TerminationDT. The PK in this case is ContactID, RoleID, and EffectiveDate; TerminationDT can be null. If, in my UI, the user changes the RoleID, I then need to update the record. Using a surrogate key I can do UPDATE Table SET RoleID = 1 WHERE surrogateID = Z. However, with the composite key approach, once one of the fields in the composite key changes I have no way to reference the old record to update it without maintaining a reference to the old values somewhere in the UI. I do not bind data sources in my UI; I open a connection, get the data, store it in a bucket, and then close the connection. What are everyone's opinions? Thanks.
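    A minimal sketch of the mechanics being described (Python with SQLite standing in for SQL Server; the table and values are invented for illustration): whichever layer issues the UPDATE has to carry the old composite-key values until the save happens, because they are the only way to address the row.

        # Updating a row identified by a composite primary key: the WHERE clause
        # is driven by the *old* key values held on to by the calling code.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE ContactRole (
                ContactID INTEGER,
                RoleID INTEGER,
                EffectiveDate TEXT,
                TerminationDT TEXT,
                PRIMARY KEY (ContactID, RoleID, EffectiveDate)
            )
        """)
        conn.execute("INSERT INTO ContactRole VALUES (7, 2, '2010-01-01', NULL)")

        old_key = {"ContactID": 7, "RoleID": 2, "EffectiveDate": "2010-01-01"}
        new_role = 1

        conn.execute(
            "UPDATE ContactRole SET RoleID = ? "
            "WHERE ContactID = ? AND RoleID = ? AND EffectiveDate = ?",
            (new_role, old_key["ContactID"], old_key["RoleID"], old_key["EffectiveDate"]),
        )
        conn.commit()
        print(conn.execute("SELECT * FROM ContactRole").fetchall())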

    Read the article

  • Save object in CoreData

    - by John
    I am using Core Data with the iPhone SDK. I am making a notes app. I have a table with note objects displayed from my model. When a button is pressed I want to save the text in the text view to the object being edited. How do I do this? I've been trying several things but none seem to work. Thanks. EDIT:

        NSManagedObjectContext *context = [fetchedResultsController managedObjectContext];
        NSEntityDescription *entity = [[fetchedResultsController fetchRequest] entity];
        NSManagedObject *newManagedObject = [NSEntityDescription insertNewObjectForEntityForName:[entity name] inManagedObjectContext:context];
        [newManagedObject setValue:detailViewController.textView.text forKey:@"noteText"];

        NSError *error;
        if (![context save:&error]) {
            /*
             Replace this implementation with code to handle the error appropriately.
             abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development. If it is not possible to recover from the error, display an alert panel that instructs the user to quit the application by pressing the Home button.
             */
            NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
            abort();
        }

    The above code saves correctly, but it saves the text as a new object. I want it to be saved to the object I have selected in my table view.

    Read the article

  • Help with interesting VS2010 and SQL2008 bug

    - by user355770
    Hey, so I'm using Visual Studio 2010 to create a web page, and I'm pulling some tables from SQL Server 2008. Here is where I'm confused: the code runs fine with no errors and the page works, except I'm missing the rows for my third column from the table. Everything else shows up. I've checked that the names match everywhere and that the joins in SQL work. It's just very weird that I'd be missing the two rows from the third column. Anyone have any ideas to help? The problem is in the tab called "research material":

        else if (tabTagId == "tpArlington_ProjectInformation")
        {
            repArlington_ProjectInformation.DataSource = ds;
            repArlington_ProjectInformation.DataBind();
        }
        else if (tabTagId == "tpArlington_Plan")
        {
            repArlington_Plan.DataSource = ds;
            repArlington_Plan.DataBind();
        }
        else if (tabTagId == "tpArlington_ResearchMaterial")
        {
            repArlington_ResearchMaterial.DataSource = ds;
            repArlington_ResearchMaterial.DataBind();
        }
        else if (Session["projectAbbreviation"].ToString() == "ARLING")
        {
            tpArlington_ProjectInformation.HeaderText = "Project Information";
            tpArlington_ProjectInformation.Visible = true;
            tpArlington_Plan.HeaderText = "Plan";
            tpArlington_Plan.Visible = true;
            tpArlington_ResearchMaterial.HeaderText = "ResearchMaterial";
            tpArlington_ResearchMaterial.Visible = true;
            getTabData("tpArlington_ProjectInformation");
            getTabData("tpArlington_Plan");
            getTabData("tpArlington_ReasearchMaterial");
        }

    The two other tabs work perfectly; the research material tab is where the problem is. The content in that tab doesn't come up: the tab text does appear, but not the data from SQL. The data in SQL looks good, the IDs match, and everything is joined properly, otherwise the other two tabs wouldn't work. That is what is confusing me. Any suggestions, or if you need specific info, just ask. Thanks!

    Read the article

  • SQL Compact performance on device

    - by Ben M
    My SQL Compact database is very simple, with just three tables and a single index on one of the tables (the table with 200k rows; the other two have less than a hundred each). The first time the .sdf file is used by my Compact Framework application on the target Windows Mobile device, the system hangs for well over a minute while "something" is done to the database: when deployed, the DB is 17 megabytes, and after this first usage, it balloons to 24 megs. All subsequent usage is pretty fast, so I'm assuming there's some sort of initialization / index building going on during this first usage. I'd rather not subject the user to this delay, so I'm wondering what this initialization process is and whether it can be performed before deployment. For now, I've copied the "initialized" database back to my desktop for use in the setup project, but I'd really like to have a better answer / solution. I've tried "full compact / repair" in the VS Database Properties dialog, but this made no difference. Any ideas? For the record, I should add that the database is only read from by the device application -- no modifications are made by that code.

    Read the article

  • How to configure an index.htm file in IIS?

    - by salvationishere
    I am running IIS 6.0 on an XP OS using VS 2008 and SQL Server 2008 (full install). I developed two web apps. Both of these I can run from IIS by setting them as the default website. However, now I have tried adding an index.htm file. Real simple; all it has is two hyperlinks to these web apps. But now only the first web app works. The first web app is pure VS. The second web app modifies an AdventureWorks database table. When I click the hyperlink for the second web app, it gives me the error below. This error doesn't make sense to me because I have the two web apps configured as two virtual directories beneath C:\inetpub\, and the index.htm file is also beneath C:\inetpub. The default website is set to home directory C:\inetpub\ with Document index.htm on top. Also, why does the first web app work and not the second now?

        Server Error in '/AddFileToSQL' Application.
        The path '/AddFileToSQL/App_GlobalResources/' maps to a directory outside this application, which is not supported.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Web.HttpException: The path '/AddFileToSQL/App_GlobalResources/' maps to a directory outside this application, which is not supported.
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Read the article

  • An algorithm for filtering out raw txt files

    - by Roman Luštrik
    Imagine you have a .txt file with the following structure:

        >>> header
        >>> header
        >>> header
        K    L     M
        200  0.1   1
        201  0.8   1
        202  0.01  3
        ...
        800  0.4   2
        >>> end of file
        50   0.1   1
        75   0.78  5
        ...

    I would like to read all the data except the lines denoted by >>> and the lines below the ">>> end of file" line. So far I've solved this using read.table(comment.char = ">", skip = x, nrow = y), with x and y currently fixed. This reads the data between the header and ">>> end of file". However, I would like to make my function a bit more flexible regarding the number of rows: the data may have values larger than 800 and consequently more rows. I could scan or readLines the file, see which row corresponds to ">>> end of file", and calculate the number of lines to be read. What approach would you use?
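    A minimal sketch of the stop-at-the-marker idea (plain Python rather than the R used in the post; the file name is an assumption): read line by line, skip the >>> headers, and break as soon as the end-of-file marker appears, so the number of data rows never has to be known in advance.

        # Read only the data block: skip ">>>" lines, stop at ">>> end of file".
        def read_data_block(path):
            rows = []
            with open(path) as fh:
                for line in fh:
                    line = line.strip()
                    if line.startswith(">>> end of file"):
                        break                    # ignore everything after the marker
                    if not line or line.startswith(">>>"):
                        continue                 # skip header/comment lines
                    rows.append(line.split())
            header, data = rows[0], rows[1:]     # first kept line holds the column names
            return header, data

        header, data = read_data_block("logger_output.txt")
        print(header, len(data))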

    Read the article

  • Dimension Reduction in Categorical Data with missing values

    - by user227290
    I have a regression model in which the dependent variable is continuous, but ninety percent of the independent variables are categorical (both ordered and unordered), and around thirty percent of the records have missing values (to make matters worse, they are missing randomly without any pattern; that is, more than forty-five percent of the data have at least one missing value). There is no a priori theory to guide the specification of the model, so one of the key tasks is dimension reduction before running the regression. While I am aware of several methods of dimension reduction for continuous variables, I am not aware of a similar statistical literature for categorical data (except, perhaps, as part of correspondence analysis, which is basically a variation of principal component analysis on a frequency table). Let me also add that the dataset is of moderate size: 500,000 observations with 200 variables. I have two questions. Is there a good statistical reference out there for dimension reduction for categorical data, along with robust imputation (I think the first issue is imputation and then dimension reduction)? This is linked to the implementation of the above problem. I have used R extensively and tend to rely heavily on the transcan and impute functions for continuous variables, and I use a variation of the tree method to impute categorical values. I have a working knowledge of Python, so if something nice is out there for this purpose then I will use it. Any implementation pointers in Python or R will be of great help. Thank you.
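    One illustrative Python sketch (pandas and scikit-learn; the toy columns are invented, and this is only one of several defensible pipelines, not a recommendation from the post): impute missing categories with the mode, one-hot encode, then reduce the dummy matrix with truncated SVD, which is close in spirit to running PCA on the indicator table that correspondence analysis uses.

        import pandas as pd
        import numpy as np
        from sklearn.impute import SimpleImputer
        from sklearn.decomposition import TruncatedSVD

        # Toy frame standing in for the real 500,000 x 200 dataset.
        df = pd.DataFrame({
            "color":  ["red", "blue", np.nan, "red",  "green"],
            "size":   ["S",   np.nan, "L",    "M",    "L"],
            "region": ["eu",  "us",   "us",   np.nan, "eu"],
        })

        # Mode imputation per column, keeping the original column names.
        imputed = pd.DataFrame(
            SimpleImputer(strategy="most_frequent").fit_transform(df),
            columns=df.columns,
        )
        dummies = pd.get_dummies(imputed)        # one indicator column per category level
        svd = TruncatedSVD(n_components=2)       # keep a handful of components
        components = svd.fit_transform(dummies)
        print(components.shape, svd.explained_variance_ratio_)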

    Read the article

  • Adding custom UITableViewCell crashes the simulator.

    - by nevva
    I'm trying to build my application using a custom UITableViewCell. This is the code in my UIViewController that adds the view cell to the table:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            NSLog(@"------- Tableview --------");
            static NSString *MyIdentifier = @"MyIdentifier";
            MyIdentifier = @"aCellIdentifier";

            MyTableCell *cell = (MyTableCell *)[tableView dequeueReusableCellWithIdentifier:MyIdentifier];
            if (cell == nil) {
                NSArray *[[NSBundle mainBundle] loadNibNamed:@"tblCellView" owner:self options:nil];
                cell = tblCell;
            }
            [cell setLabelText:[NSString stringWithFormat:@"indexpath.row: %d", indexPath.row]];
            //cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:MyIdentifier] autorelease];
            return cell;
        }

    If I uncomment the line above "return cell" it returns a regular UITableViewCell without any errors, but as soon as I try to use my custom cell it crashes with this error:

        ------- Tableview --------
        2010-04-23 11:17:33.163 SogetGolf[26935:40b] * Assertion failure in -[UITableView _createPreparedCellForGlobalRow:withIndexPath:], /SourceCache/UIKit_Sim/UIKit-984.38/UITableView.m:4709
        2010-04-23 11:17:33.164 SogetGolf[26935:40b] * Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'UITableView dataSource must return a cell from tableView:cellForRowAtIndexPath:'
        2010-04-23 11:17:33.165 SogetGolf[26935:40b] Stack: (
        ...

    I have configured the .xib file as one should, with the proper outlets, and the identifier of the UITableViewCell corresponds to the name I'm trying to load from the NSBundle.

    Read the article

  • Guidance required: first time working with a real high-end database (size = 50GB)

    - by claws
    I got a project designing a database. This is going to be my first large-scale project. The good thing about it is that the information is mostly organized and currently stored in text files. The size of this information is 50GB. There are going to be a few million records in each table, and around 50 tables in total. I need to provide a web interface for searching and browsing. I'm going to use the MySQL DBMS. I've never worked with a database larger than 200MB before, so speed and performance were never a concern, though I followed things like normalization and indexes. I never used any kind of testing, benchmarking, or query optimization because I never had to care about them. But here the purpose of creating the database is to make it quickly searchable, so I need to consider all possible aspects in the design. I was browsing the archives and found: http://stackoverflow.com/questions/1981526/what-should-every-developer-know-about-databases http://stackoverflow.com/questions/621884/database-development-mistakes-made-by-app-developers I'm going to keep the points mentioned in the above answers in mind. What else should I know? What else should I keep in mind?

    Read the article

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized as bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently, I do statistical analysis on the data using a Python/NumPy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that will take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analysis. For example, one user could search for all items that are blue and were shipped between week x and week y, while another user could sort the weight distribution of all items by hour across the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but this seemed inefficient. I'm looking forward to hearing your ideas. Thanks.
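    One illustrative sketch of pushing the aggregation into PostgreSQL (Python with psycopg2 is an assumption, as are the table and column names): the web tier only receives histogram bins rather than millions of raw rows, which keeps response times reasonable regardless of whether the page itself is served by PHP or Python.

        import psycopg2

        conn = psycopg2.connect("dbname=logger_data")
        with conn, conn.cursor() as cur:
            cur.execute(
                """
                SELECT width_bucket(item_weight, 0, 100, 20) AS bucket,
                       count(*)                              AS n
                FROM items
                WHERE item_color = %s
                  AND date_collected BETWEEN %s AND %s
                GROUP BY bucket
                ORDER BY bucket
                """,
                ("blue", "2010-01-04", "2010-01-18"),
            )
            histogram = cur.fetchall()   # list of (bucket, count) pairs, ready to plot

        print(histogram)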

    Read the article

  • How to control the "flow" of an ASP.NET MVC (3.0) web app that relies on Facebook membership, with Facebook C# SDK?

    - by Chad
    I want to totally remove the standard ASP.NET membership system and use Facebook only for my web app's membership. Note, this is not a Facebook canvas app question. Typically, in an ASP.NET app you have some key properties & methods to control the "flow" of an app. Notably: Request.IsAuthenticated, [Authorize] (in MVC apps), Membership.GetUser() and Roles.IsUserInRole(), among others. It looks like [FacebookAuthorize] is equivalent to [Authorize]. Also, there's some standard work I do across all controllers in my site. So I built a BaseController that overrides OnActionExecuting(FilterContext). Typically, I populate ViewData with the user's profile within this action. Would performance suffer if I made a call to fbApp.Get("me") in this action? I use the Facebook Javascript SDK to do registration, which is nice and easy. But that's all client-side, and I'm having a hard time wrapping my mind around when to use client-side facebook calls versus server-side. There will be a point when I need to grab the user's facebook uid and store it in a "profile" table along with a few other bits of data. That would probably be best handled on the return url from the registration plugin... correct? On a side note, what data is returned from fbApp.Get("me")?

    Read the article

  • Can't authenticate mobile client with node.js (using passport.js)

    - by Pazinio
    I'm trying to build a CRUD application with node.js as a back-end API (Express), plus a web app (Backbone) and a mobile client (native Android) as front-ends. (I'm a node.js beginner.) My server solution is based on the great tutorial 'easy-node-authentication'. In my Android app I have managed to get the user's Google token after completing the authentication step with the Google Plus SDK (the mobile client talks to Google Plus directly). I'm trying to understand and find a clean, elegant way to re-use a given Google token: authenticate my Android user again through their Google Plus account to ensure the mobile client holds a real token, then add a new entry (id, token, email, name) to my users table in my node back-end. The question is: what should my next step be if I want to keep my back-end unchanged? Should I send a GET request with the token as a cookie to /auth/google? Maybe to /auth/google/callback? Another URL? Does this make sense at all? Please note: I'm aware that the 'easy-node-auth' solution mentioned above is based on sessions and cookies. Having said that, I'm still trying to understand whether there is a convenient way to integrate both (Android and node), as it works well for my web app and node. Thanks in advance.

    Read the article

  • My list item's child label elements disappear in IE on accordion menu opening

    - by Scott B
    I've got an app that's working pretty flawlessly in Chrome and FF; however, when I view it in IE, all is well until I click on a header element to activate it (jQuery accordion). What happens then is that I see a brief flash where the content is there, then suddenly the entire left column disappears. This column is generated by a floated label element with a class of ".left", as seen below:

        <ul class="menu collapsible">
            <li class='expand sectionTitle'><a href='#'>General Settings</a>
                <ul class='acitem'>
                    <li class="section">
                        <label class="left">This item is floated left with a defined width of 190px via CSS. This is the item that's disappearing after a brief display.</label>
                        <input class="input" value="input element here" />
                        <label class="description">This element has margin-left:212px; set via CSS in order to be positioned to the right of the label element, as if in an adjacent table cell. When I add a max-width property to this element, it disappears in IE too!</label>
                    </li>
                </ul>
            </li>
        </ul>

    As you can see from the comments in the code above (for the two label elements), the description label disappears once I set a max-width on it. I don't have a max-width on the left label element, but it disappears nonetheless. The initial view of this UL menu is fine (note the expand class declaration, which makes this part of the accordion open at startup). It's not until I click "General Settings" to toggle it closed, then back open, that the .left elements disappear (and only in IE).

    Read the article

  • How do I make a function in SQL Server that accepts a column of data?

    - by brandon k
    I made the following function in SQL Server 2008 earlier this week. It takes two parameters and uses them to select a column of "detail" records, returning them as a single varchar of comma-separated values. Now that I think about it, I would like to take this table- and application-specific function and make it more generic. I am not well versed in defining SQL functions, as this is my first. How can I change this function to accept a single "column" worth of data, so that I can use it in a more generic way? Instead of calling:

        SELECT ejc_concatFormDetails(formuid, categoryName)

    I would like to make it work like:

        SELECT concatColumnValues(SELECT someColumn FROM SomeTable)

    Here is my function definition:

        FUNCTION [DNet].[ejc_concatFormDetails](@formuid AS int, @category as VARCHAR(75))
        RETURNS VARCHAR(1000) AS
        BEGIN
            DECLARE @returnData VARCHAR(1000)
            DECLARE @currentData VARCHAR(75)
            DECLARE dataCursor CURSOR FAST_FORWARD FOR
                SELECT data FROM DNet.ejc_FormDetails WHERE formuid = @formuid AND category = @category
            SET @returnData = ''
            OPEN dataCursor
            FETCH NEXT FROM dataCursor INTO @currentData
            WHILE (@@FETCH_STATUS = 0)
            BEGIN
                SET @returnData = @returnData + ', ' + @currentData
                FETCH NEXT FROM dataCursor INTO @currentData
            END
            CLOSE dataCursor
            DEALLOCATE dataCursor
            RETURN SUBSTRING(@returnData, 3, 1000)
        END

    As you can see, I am selecting the column data within my function and then looping over the results with a cursor to build my comma-separated varchar. How can I alter this to accept a single parameter that is a result set, and then access that result set with a cursor?

    Read the article

  • Adding extra data to a variable

    - by DogPooOnYourShoe
    Right now, my code pulls out only one value using MySQL. So I thought I might as well add each found result to a variable, but I don't know how to do this. This must be a very basic question, but I can't find an answer for it.

        echo '<table border="1">';
        echo "<tr><td><b>Surname</b></td><td><b>Title/Name</b></td><td><b>Numbers</b></td><td><b>Telephone</b></td><td><b>Edit</b></td><td><b>Del</b></td></tr>\n";
        while ($row = mysql_fetch_array($result)) {
            $Surname = $row["Surname"];
            $Title = $row["TitleName"];
            $Email = $row["Email"];
            $Telephone = $row["Telephone"];
            $id = $row["id"];
            $MooringNumbers = $row['Number'];
            $Assignedto['AssignedTo'];
        }

        $MooringQuery = "select * FROM mooring WHERE AssignedTo='$id'";
        $MooringResult = mysql_query($MooringQuery) or die("Couldn't execute query");
        while ($row1 = mysql_fetch_array($MooringResult)) {
            $AssignedTo = $row1["AssignedTo"];
            $MooringNumbers = $row1["Number"];
            echo '<tr><td>' .$Surname.'</td><td>'.$Title.'</td><td>'.$MooringNumbers . '</td><td>'.$Telephone.'</td><td>' . '<a href="rlayCustomerUpdtForm.php?id='.$id.'">[EDIT]</a></td>'.'<td>'. '<a href="deleteCustomer.php?id='.$id.'">[x]</a></td>'. '</tr>';
        }

    Read the article

  • How to optimize paging for large in memory database

    - by snakefoot
    I have an application where the entire database is implemented in memory, using an STL map for each table in the database. Each item in an STL map is a complex object with references to items in the other maps. The application works with a large amount of data, so it uses more than 500 MB of RAM. Clients are able to contact the application and get a filtered version of the entire database. This is done by running through the entire database and finding the items relevant to the client. When the application has been running for an hour or so, Windows 2003 SP2 starts to page out parts of its RAM, even though there is 16 GB of RAM in the machine. After the application has been partly paged out, a client logon takes a long time (10 minutes), because it now generates a page fault for each pointer lookup in the STL maps. I can see it is possible to tell Windows to lock memory in RAM, but this is generally only recommended for device drivers, and only for "small" amounts of memory. I guess a poor man's solution could be to loop through the entire in-memory database periodically, and thus tell Windows we are still interested in keeping the data model in RAM. Another poor man's solution could be to disable the page file completely on Windows. I guess the expensive solution would be a SQL database, which would mean rewriting the entire application to use a database layer; then hopefully the database system would have implemented means for fast access. Are there other, more elegant solutions?

    Read the article

  • NServiceBus deleting subscription record after inserting it?

    - by Justin Holbrook
    I have been playing with NServiceBus for a few weeks now, and since everything was going well on my local machine I decided to set up a test environment and work on deployment. I am using the generic host that comes with NServiceBus, and was using the NServiceBus.Integration profile when running locally, but would like to use NServiceBus.Production in the test environment. I set up a SQL Server 2008 database, made changes to my app.config, and everything seemed to work fine. But after a few attempts, I noticed messages were not being picked up by my subscriber. I checked the subscription table and it was empty. Upon examining the logs I noticed the following:

        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null) - Insert 0: INSERT INTO [Subscription] (SubscriberEndpoint, MessageType) VALUES (?, ?)
        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null) - Update 0:
        2010-05-06 15:07:57,416 [1] DEBUG NHibernate.Persister.Entity.AbstractEntityPersister [(null)] <(null) - Delete 0: DELETE FROM [Subscription] WHERE SubscriberEndpoint = ? AND MessageType = ?

    Why would it insert my subscription and then delete it right afterwards? To try to rule out an NHibernate dialect issue I tried switching my subscription storage to an Oracle 10g database. It behaved exactly the same: it worked the first two times, then I started seeing my subscriptions being deleted right after they were inserted. Any ideas what is causing this behavior?

    Read the article

  • MD5CryptoServiceProvider ComputeHash Issues between VS 2003 and VS 2008

    - by owensoroke
    I have a database application that generates an MD5 hash and compares the hash value to a value in our DB (SQL 2K). The original application was written in Visual Studio 2003 and a deployed version has been working for years. Recently, some new machines on .NET Framework 3.5 have been having unrelated issues with our runtime. This has forced us to port our code path from Visual Studio 2003 to Visual Studio 2008. Since then, the hash produced by the code is different from the values in the database. The original call to the function posted in the code below is:

        RemoveInvalidPasswordCharactersFromHashedPassword(Text_Scrub(GenerateMD5Hash(strPSW)))

    I am looking for expert guidance as to whether the MD5 methods have changed since VS 2003 (causing this point of failure), or where other possible problems may originate. I realize this may not be the best way to hash, but ultimately any change to the MD5 code would force us to change some 300 values in our DB table and would cost us a lot of time. In addition, I am trying to avoid having to redeploy all of the functioning versions of this application. I am more than happy to post other code, including the RemoveInvalidPasswordCharactersFromHashedPassword function or our Text_Scrub, if it is necessary to receive appropriate feedback. Thank you in advance for your input.

        Public Function GenerateMD5Hash(ByVal strInput As String) As String
            Dim md5Provider As MD5
            ' generate bytes for the input string
            Dim inputData() As Byte = ASCIIEncoding.ASCII.GetBytes(strInput)
            ' compute MD5 hash
            md5Provider = New MD5CryptoServiceProvider
            Dim hashResult() As Byte = md5Provider.ComputeHash(inputData)
            Return ASCIIEncoding.ASCII.GetString(hashResult)
        End Function
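    A small illustration of why round-tripping a digest through an ASCII codec is fragile (Python's hashlib rather than the VB.NET above, and the password value is invented): the 16 digest bytes themselves are stable across frameworks, but many of them fall outside the ASCII range, so converting them to text and back is lossy; a hex representation is unambiguous.

        import hashlib

        password = "example-password"            # hypothetical input
        digest = hashlib.md5(password.encode("ascii")).digest()

        ascii_ish = digest.decode("ascii", errors="replace")   # what ASCII round-tripping yields
        hex_form = digest.hex()                                # unambiguous representation

        print(repr(ascii_ish))   # bytes >= 0x80 come back as replacement characters
        print(hex_form)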

    Read the article

  • Caching queries in Django

    - by dolma33
    In a Django project I only need to cache a few queries, using a cache table instead of memcached because of server limitations. One of those queries looks like this: let's say I have a Parent object, which has a lot of Child objects. I need to store the result of the simple query parent.children.all(). I have no problem with that, and everything works as expected with code like:

        key = "%s_children" % (parent.name)
        value = cache.get(key)
        if value is None:
            cache.set(key, parent.children.all(), CACHE_TIMEOUT)
            value = cache.get(key)

    But sometimes, just sometimes, cache.set does nothing, and after executing cache.set, cache.get(key) keeps returning None. After some tests, I've noticed that cache.set stops working when parent.children.all().count() has higher values. That means that if I'm storing (for example) 600 child objects inside the key, it works fine, but it won't work with 1200 children. So my question is: is there a limit to the amount of data a key can store? How can I override it? Second question: which way is "better", the above code or the following one?

        key = "%s_children" % (parent.name)
        value = cache.get(key)
        if value is None:
            value = parent.children.all()
            cache.set(key, value, CACHE_TIMEOUT)

    The second version won't cause errors if cache.set doesn't work, so it could be a workaround for my issue, but obviously not a solution. In general, forgetting about my issue, which version would you consider "better"?
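    A minimal sketch of the second pattern (assuming Django's low-level cache API; CACHE_TIMEOUT and the key format are taken from the post, and forcing evaluation with list() is an added assumption): evaluate the queryset once, store the concrete list, and reuse the value that was just built instead of reading the cache back, so a failed set only costs a later re-query.

        from django.core.cache import cache

        def get_children(parent, timeout):
            key = "%s_children" % parent.name
            value = cache.get(key)
            if value is None:
                value = list(parent.children.all())   # run the query once, get a picklable list
                cache.set(key, value, timeout)        # if this silently fails, we still have value
            return value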

    Read the article

  • Slow MySQL count because of subselect

    - by frgt10
    How can I make this select statement faster? The first LEFT JOIN with the subselect is making it slower:

        mysql> SELECT COUNT(DISTINCT w1.id) AS AMOUNT
               FROM tblWerbemittel w1
               JOIN tblVorgang v1 ON w1.object_group = v1.werbemittel_id
               INNER JOIN ( SELECT wmax.object_group, MAX( wmax.object_revision ) wmaxobjrev
                            FROM tblWerbemittel wmax GROUP BY wmax.object_group ) AS wmaxselect
                   ON w1.object_group = wmaxselect.object_group AND w1.object_revision = wmaxselect.wmaxobjrev
               LEFT JOIN ( SELECT vmax.object_group, MAX( vmax.object_revision ) vmaxobjrev
                           FROM tblVorgang vmax GROUP BY vmax.object_group ) AS vmaxselect
                   ON v1.object_group = vmaxselect.object_group AND v1.object_revision = vmaxselect.vmaxobjrev
               LEFT JOIN tblWerbemittel_has_tblAngebot wha ON wha.werbemittel_id = w1.object_group
               LEFT JOIN tblAngebot ta ON ta.id = wha.angebot_id
               LEFT JOIN tblLieferanten tl ON tl.id = ta.lieferant_id
                   AND wha.zuschlag = (SELECT MAX(zuschlag) FROM tblWerbemittel_has_tblAngebot WHERE werbemittel_id = w1.object_group)
               WHERE w1.flags = 0 AND v1.flags = 0;
        +--------+
        | AMOUNT |
        +--------+
        |   1982 |
        +--------+
        1 row in set (1.30 sec)

    Some indexes have already been set, and as EXPLAIN shows, they are used:

        id | select_type        | table                         | type   | possible_keys                          | key                  | key_len | ref                                           | rows | Extra
        1  | PRIMARY            | <derived2>                    | ALL    | NULL                                   | NULL                 | NULL    | NULL                                          | 2072 |
        1  | PRIMARY            | v1                            | ref    | werbemittel_group,werbemittel_id_index | werbemittel_group    | 4       | wmaxselect.object_group                       | 2    | Using where
        1  | PRIMARY            | <derived3>                    | ALL    | NULL                                   | NULL                 | NULL    | NULL                                          | 3376 |
        1  | PRIMARY            | w1                            | eq_ref | object_revision,or_og_index            | object_revision      | 8       | wmaxselect.wmaxobjrev,wmaxselect.object_group | 1    | Using where
        1  | PRIMARY            | wha                           | ref    | PRIMARY,werbemittel_id_index           | werbemittel_id_index | 4       | dpd.w1.object_group                           | 1    |
        1  | PRIMARY            | ta                            | eq_ref | PRIMARY                                | PRIMARY              | 4       | dpd.wha.angebot_id                            | 1    |
        1  | PRIMARY            | tl                            | eq_ref | PRIMARY                                | PRIMARY              | 4       | dpd.ta.lieferant_id                           | 1    | Using index
        4  | DEPENDENT SUBQUERY | tblWerbemittel_has_tblAngebot | ref    | PRIMARY,werbemittel_id_index           | werbemittel_id_index | 4       | dpd.w1.object_group                           | 1    |
        3  | DERIVED            | vmax                          | index  | NULL                                   | object_revision_uq   | 8       | NULL                                          | 4668 | Using index; Using temporary; Using filesort
        2  | DERIVED            | wmax                          | range  | NULL                                   | or_og_index          | 4       | NULL                                          | 2168 | Using index for group-by
        10 rows in set (0.01 sec)

    The main problem, given that the statement above takes about 2 seconds, seems to be the subselect, where no index can be used. How can I write the statement to be even faster? Thanks for the help. MT

    Read the article

  • I need some ideas on my algorithm for a hit counter

    - by stckvrflw
    My algorithm is for a hit counter. I am trying not to count the same person twice if that person comes to the site twice within a time interval (for example, if he comes twice in 5 minutes, I want to count it as 1 for this person). Here is how my database looks:

        UserIp      UserId         Date of visit
        127.0.0.1   new.user.akb   26.03.2010 10:15:44
        127.0.0.1   new.user.akb   26.03.2010 10:16:44
        127.0.0.1   new.user.akb   26.03.2010 10:17:44
        127.0.0.1   new.user.akb   26.03.2010 10:18:44
        127.0.0.1   new.user.akb   26.03.2010 10:19:44
        127.0.0.1   new.user.akb   26.03.2010 10:20:44
        127.0.0.1   new.user.akb   26.03.2010 10:21:44
        127.0.0.1   new.user.akb   26.03.2010 10:22:44
        127.0.0.1   new.user.akb   26.03.2010 10:23:44

    What I need to do is get the number of distinct UserIps from the table above that occurred within each time interval. For example, if I set the time interval to 5 minutes, starting at 26.03.2010 10:15:44, then I will get 2 as the result, since there is 1 distinct value between 10:15 and 10:20 and 1 distinct value from 10:20 to 10:23. If my interval is 3 minutes instead, the returned result will be 3. Thanks.
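    A minimal sketch of the bucketing logic (plain Python; the visit list mirrors the sample table above, and doing this in application code rather than in SQL is an assumption): assign each visit to a fixed-size interval measured from the first visit, count distinct IPs per interval, and sum the buckets for the total.

        from datetime import datetime, timedelta

        # One visit per minute from 10:15:44 to 10:23:44, all from the same IP.
        visits = [("127.0.0.1", datetime(2010, 3, 26, 10, m, 44)) for m in range(15, 24)]
        interval = timedelta(minutes=5)

        start = min(t for _, t in visits)
        buckets = {}
        for ip, t in visits:
            bucket = (t - start) // interval          # 0 for the first 5 minutes, 1 for the next, ...
            buckets.setdefault(bucket, set()).add(ip)

        per_bucket = {b: len(ips) for b, ips in buckets.items()}
        print(per_bucket, sum(per_bucket.values()))   # {0: 1, 1: 1} 2 with a 5-minute interval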

    Read the article

  • Why is my long polling code for a notification system not updating in real time? PHP MYSQL

    - by tjones
    I am making a notification system similar to the red notification on Facebook. It should update the number of messages sent to a user in real time: when the messages MySQL table is updated, it should instantly notify the user, but it does not. There does not seem to be an error inserting into MySQL, because on page refresh the notifications update just fine. I am essentially using code from this video tutorial: http://www.screenr.com/SNH (which updates in real time if a data.txt file is changed, but it is not written for MySQL as I am trying to do). Is there something wrong with the code below?

    Javascript:

        <script type="text/javascript">
        $(document).ready(function(){
            var timestamp = null;
            function waitForMsg(){
                $.ajax({
                    type: "GET",
                    url: "getData.php",
                    data: "userid=" + userid,
                    async: true,
                    cache: false,
                    success: function(data){
                        var json = eval('(' + data + ')');
                        if (json['msg'] != "") {
                            $('.notification').fadeIn().html(json['msg']);
                        }
                        setTimeout('waitForMsg()',30000);
                    },
                    error: function(XMLHttpRequest, textStatus, errorThrown){
                        setTimeout('waitForMsg()',30000);
                    }
                });
            }
            waitForMsg();
        </script>

        <body>
        <div class="notification"></div>

    PHP:

        <?php
        if ($_SERVER['REQUEST_METHOD'] == 'GET' ) {
            $userid = $_GET['userid'];
            include("config.php");

            $sql = "SELECT MAX(time) FROM notification WHERE userid='$userid'";
            $result = mysql_query($sql);
            $row = mysql_fetch_assoc($result);
            $currentmodif = $row['MAX(time)'];

            $s = "SELECT MAX(lasttimeread) FROM notificationsRead WHERE submittedby='$userid'";
            $r = mysql_query($s);
            $rows = mysql_fetch_assoc($r);
            $lasttimeread = $rows['MAX(lasttimeread)'];

            while ($currentmodif <= $lasttimeread) {
                usleep(10000);
                clearstatcache();
                $currentmodif = $row['MAX(time)'];
            }

            $response = array();
            $response['msg'] = You have new messages;
            echo json_encode($response);
        }
        ?>

    Read the article
