Search Results

Search found 18729 results on 750 pages for 'edit'.


  • Optimizing an embedded SELECT query in mySQL

    - by Crazy Serb
    Ok, here's a query that I am running right now on a table that has 45,000 records and is 65MB in size... and it's just about to get bigger and bigger, so I have to think about future performance here as well:

        SELECT COUNT(payment_id) AS signup_count, SUM(amount) AS signup_amount
        FROM payments p
        WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND completed > 0
          AND tm_completed IS NOT NULL
          AND member_id NOT IN (SELECT p2.member_id
                                FROM payments p2
                                WHERE p2.completed = 1
                                  AND p2.tm_completed < '2009-05-01'
                                  AND p2.tm_completed IS NOT NULL
                                GROUP BY p2.member_id)

    And as you might or might not imagine, it chokes the MySQL server to a standstill. What it does is simply pull the number of new users who signed up, have at least one "completed" payment, and whose tm_completed is not empty (it is only populated for completed payments); the embedded SELECT requires that the member has never had a "completed" payment before, meaning he's a new member. (The system does rebills and whatnot, and this is the only way to differentiate between an existing member who just got rebilled and a new member who got billed for the first time.)

    Now, is there any way to optimize this query to use fewer resources and stop bringing my MySQL server to its knees? Am I missing any info needed to clarify this further? Let me know.

    EDIT: Here are the indexes already on that table:

        PRIMARY       PRIMARY  46757  payment_id
        member_id     INDEX    23378  member_id
        payer_id      INDEX    11689  payer_id
        coupon_id     INDEX        1  coupon_id
        tm_added      INDEX    46757  tm_added, product_id
        tm_completed  INDEX    46757  tm_completed, product_id
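
    One common rewrite for this pattern is an anti-join: replace the NOT IN (...) subquery with a LEFT JOIN that keeps only rows with no earlier completed payment. A hedged sketch against the schema above (the composite index suggestion is an assumption; verify with EXPLAIN):

        SELECT COUNT(p.payment_id) AS signup_count, SUM(p.amount) AS signup_amount
        FROM payments p
        LEFT JOIN payments prior
               ON prior.member_id = p.member_id
              AND prior.completed = 1
              AND prior.tm_completed < '2009-05-01'
        WHERE p.tm_completed BETWEEN '2009-05-01' AND '2009-05-30'
          AND p.completed > 0
          AND prior.member_id IS NULL   -- no earlier completed payment: a new member

    An index on (member_id, completed, tm_completed) would let the join probe be answered from the index alone instead of rescanning the table per member.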

    Read the article

  • rails not recognizing project

    - by tipu
    I can create a new project using rails, and I can use stuff like rails migration ..., and I (correctly) get an error because the sqlite gem is missing. But when I try using rails migration ... with a project I checked out from GitHub, it doesn't recognize that it is a Rails project. I get:

        Usage: rails new APP_PATH [options]

        Options:
          -d, [--database=DATABASE]    # Preconfigure for selected database
                                       # (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                       # Default: sqlite3
          -O, [--skip-active-record]   # Skip Active Record files
              [--dev]                  # Setup the application with Gemfile pointing to your Rails checkout
          -J, [--skip-prototype]       # Skip Prototype files
          -T, [--skip-test-unit]       # Skip Test::Unit files
          -G, [--skip-git]             # Skip Git ignores and keeps
          -b, [--builder=BUILDER]      # Path to an application builder (can be a filesystem path or URL)
              [--edge]                 # Setup the application with Gemfile pointing to Rails repository
          -m, [--template=TEMPLATE]    # Path to an application template (can be a filesystem path or URL)
          -r, [--ruby=PATH]            # Path to the Ruby binary of your choice
                                       # Default: /usr/bin/ruby1.8
              [--skip-gemfile]         # Don't create a Gemfile

    ...and it goes on. Any ideas?

    EDIT: It's probably an important detail that earlier my rails wasn't working at all. I had to cp /usr/bin/ruby to /usr/bin/local/ruby.
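
    That usage text is what the Rails 3 binary prints when the current directory doesn't look like a Rails 3 application, which typically means the checkout is a Rails 2.x app, or that its bundled Rails version isn't being used. A hedged way to check, assuming a standard project layout:

        # A Rails 3 app has script/rails; a Rails 2.x app has these instead:
        ls script/rails
        ls script/console script/server

        # If there is a Gemfile, run the project's own Rails version through Bundler:
        bundle install
        bundle exec rails --version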

    Read the article

  • Is it possible to refer to metadata of the target from within the target implementation in MSBuild?

    - by mark
    Dear ladies and sirs. My MSBuild targets file contains the following section:

        <ItemGroup>
          <Targets Include="T1">
            <Project>A\B.sln</Project>
            <DependsOnTargets>The targets T1 depends on</DependsOnTargets>
          </Targets>
          <Targets Include="T2">
            <Project>C\D.csproj</Project>
            <DependsOnTargets>The targets T2 depends on</DependsOnTargets>
          </Targets>
          ...
        </ItemGroup>

        <Target Name="T1" DependsOnTargets="The targets T1 depends on">
          <MSBuild Projects="A\B.sln" Properties="Configuration=$(Configuration)" />
        </Target>
        <Target Name="T2" DependsOnTargets="The targets T2 depends on">
          <MSBuild Projects="C\D.csproj" Properties="Configuration=$(Configuration)" />
        </Target>

    As you can see, A\B.sln appears twice: as Project metadata of T1 in the ItemGroup section, and in the Target statement itself, passed to the MSBuild task. I am wondering whether I can remove the second instance and replace it with a reference to the Project metadata of the Targets item whose name matches the executing target. Exactly the same question applies to the %(Targets.DependsOnTargets) metadata; it is mentioned twice, much like the %(Targets.Project) metadata. Thanks.

    EDIT: I should probably describe the constraints which must be satisfied by the solution: I want to be able to build individual projects with ease. Today I can simply execute msbuild file.proj /t:T1 to build the T1 target, and I wish to keep this ability. I wish to emphasize that some projects depend on others, so the DependsOnTargets attribute is really necessary for them.
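
    One hedged direction is target batching: a single generic target whose attributes reference %(Targets...) metadata runs once per item, so the project path lives only in the ItemGroup. A sketch, unverified - in particular, whether DependsOnTargets batches per item this way needs testing, and per-target invocation would change from /t:T1 to /t:BuildOne plus a filter property:

        <Target Name="BuildOne"
                Outputs="%(Targets.Identity)"
                DependsOnTargets="%(Targets.DependsOnTargets)">
          <!-- Runs once per Targets item; %(Targets.Project) is that batch's metadata -->
          <MSBuild Projects="%(Targets.Project)" Properties="Configuration=$(Configuration)" />
        </Target>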

    Read the article

  • NetBeans Platform - how to refresh the property sheet view of a node?

    - by I82Much
    Hi all, I am using the PropertySheetView component to visualize and edit the properties of a node. This view should always reflect the most recent properties of the object; if there is a change to the object in another process, I want to somehow refresh the view and see the updated properties. The best way I was able to do this is something like the following (making use of the EventBus library to publish and subscribe to changes in objects):

        public DomainObjectWrapperNode(DomainObject obj) {
            super(Children.LEAF, Lookups.singleton(obj));
            EventBus.subscribe(DomainObject.class, this);
        }

        public void onEvent(DomainObject event) {
            // Do a check to determine if the updated object is the one wrapped by this node;
            // if so fire a property sets change
            firePropertySetsChange(null, this.getPropertySets());
        }

    This works, but my place in the scrollpane is lost when the sheet refreshes; it resets the view to the top of the list and I have to scroll back down to where I was before the refresh action. So my question is: is there a better way to refresh the property sheet view of a node, specifically so that my place in the property list is not lost upon refresh?
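
    A hedged alternative, assuming the set of properties is unchanged and only their values differ: fire a value-level change instead of replacing the whole property sets, so the sheet can update cells in place rather than rebuilding (the rebuild is what resets the scroll position). Untested sketch:

        public void onEvent(DomainObject event) {
            if (!isWrappedObject(event)) return; // hypothetical identity check
            // A null property name conventionally means "any property may have changed";
            // the sheet re-reads values without discarding its structure.
            firePropertyChange(null, null, null);
        }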

    Read the article

  • Should I use IDisposable for purely managed resources?

    - by John Gietzen
    Here is the scenario: I have an object called a Transaction that needs to make sure that only one entity has permission to edit it at any given time. In order to facilitate a long-lived lock, I have the class generating a token object that can be used to make the edits. You would use it like this:

        var transaction = new Transaction();
        using (var tlock = transaction.Lock())
        {
            transaction.Update(data, tlock);
        }

    Now, I want the TransactionLock class to implement IDisposable so that its usage can be clear. But I don't have any unmanaged resources to dispose. However, the TransactionLock object itself is a sort of "unmanaged resource" in the sense that the CLR doesn't know how to properly finalize it. All of this would be fine and dandy; I would just use IDisposable and be done with it. However, my issue comes when I try to do this in the finalizer:

        ~TransactionLock()
        {
            this.Dispose(false);
        }

    I want the finalizer to release the transaction from the lock, if possible. How, in the finalizer, do I detect whether the parent transaction (this.transaction) has already been finalized? Is there a better pattern I should be using? The Transaction class looks something like this:

        public sealed class Transaction
        {
            private readonly object lockMutex = new object();
            private TransactionLock currentLock;

            public TransactionLock Lock()
            {
                lock (this.lockMutex)
                {
                    if (this.currentLock != null)
                        throw new InvalidOperationException(/* ... */);
                    this.currentLock = new TransactionLock(this);
                    return this.currentLock;
                }
            }

            public void Update(object data, TransactionLock tlock)
            {
                lock (this.lockMutex)
                {
                    this.ValidateLock(tlock);
                    // ...
                }
            }

            internal void ValidateLock(TransactionLock tlock)
            {
                if (this.currentLock == null)
                    throw new InvalidOperationException(/* ... */);
                if (this.currentLock != tlock)
                    throw new InvalidOperationException(/* ... */);
            }

            internal void Unlock(TransactionLock tlock)
            {
                lock (this.lockMutex)
                {
                    this.ValidateLock(tlock);
                    this.currentLock = null;
                }
            }
        }
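
    A hedged sketch of one resolution: since TransactionLock holds no unmanaged state, a finalizer is unnecessary, and unsafe besides, because finalization order is unspecified, so this.transaction and its lockMutex may already have been finalized when the finalizer runs. A Dispose-only token avoids the question entirely; if the token is never disposed, both objects become garbage together anyway:

        // Sketch only; assumes the Transaction class shown in the question.
        public sealed class TransactionLock : IDisposable
        {
            private readonly Transaction transaction;
            private bool disposed;

            internal TransactionLock(Transaction transaction)
            {
                this.transaction = transaction;
            }

            public void Dispose()
            {
                if (this.disposed)
                    return;
                this.disposed = true;
                this.transaction.Unlock(this); // purely managed: no finalizer needed
            }
        }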

    Read the article

  • Word wrap in multiline textbox after 35 characters

    - by Kanavi
        <asp:TextBox CssClass="txt" ID="TextBox1" runat="server" onkeyup="CountChars(this);"
            Rows="20" Columns="35" TextMode="MultiLine" Wrap="true">
        </asp:TextBox>

    I need to implement word-wrapping in a multi-line textbox. I cannot allow users to write more than 35 characters a line. I am using the following code, which breaks at precisely the specified character on every line, cutting words in half. Can we fix this so that if there's not enough space left for a word on the current line, we move the whole word to the next line?

        function CountChars(ID) {
            var IntermediateText = '';
            var FinalText = '';
            var SubText = '';
            var text = document.getElementById(ID.id).value;
            var lines = text.split("\n");
            for (var i = 0; i < lines.length; i++) {
                IntermediateText = lines[i];
                if (IntermediateText.length <= 50) {
                    if (lines.length - 1 == i)
                        FinalText += IntermediateText;
                    else
                        FinalText += IntermediateText + "\n";
                } else {
                    while (IntermediateText.length > 50) {
                        SubText = IntermediateText.substring(0, 50);
                        FinalText += SubText + "\n";
                        IntermediateText = IntermediateText.replace(SubText, '');
                    }
                    if (IntermediateText != '') {
                        if (lines.length - 1 == i)
                            FinalText += IntermediateText;
                        else
                            FinalText += IntermediateText + "\n";
                    }
                }
            }
            document.getElementById(ID.id).value = FinalText;
            $('#' + ID.id).scrollTop($('#' + ID.id)[0].scrollHeight);
        }

    Edit - 1: I need to show at most 35 characters per line without breaking words, keeping a margin of two characters from the right. So the restriction should be 35 characters, but there needs to be room for 37 in total (just for visibility).
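
    A hedged replacement sketch that wraps at word boundaries; maxLen would be 35 per the question (note the original code compares against 50, so pick whichever limit is actually intended). Words longer than the limit still get hard-broken as a fallback:

        function wrapWords(text, maxLen) {
            var out = [];
            var lines = text.split("\n");
            for (var i = 0; i < lines.length; i++) {
                var words = lines[i].split(" ");
                var line = "";
                for (var j = 0; j < words.length; j++) {
                    var word = words[j];
                    while (word.length > maxLen) { // fallback: hard-break oversized words
                        if (line.length > 0) { out.push(line); line = ""; }
                        out.push(word.substring(0, maxLen));
                        word = word.substring(maxLen);
                    }
                    if (line.length === 0) {
                        line = word;
                    } else if (line.length + 1 + word.length <= maxLen) {
                        line += " " + word;
                    } else {
                        out.push(line);
                        line = word;
                    }
                }
                out.push(line);
            }
            return out.join("\n");
        }

        // Usage inside the existing handler:
        // el.value = wrapWords(el.value, 35);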

    Read the article

  • navigate all items in a wpf tree view

    - by Brian Leahy
    I want to be able to traverse the visual UI tree looking for an element whose Tag property is bound to a given ID, and I'm wondering how to do this. Controls don't have children to traverse. I started using LogicalTreeHelper.GetChildren, which seems to work as intended, up until I hit a TreeView control; then LogicalTreeHelper.GetChildren doesn't return any children.

    Note: the purpose is to find the visual UI element that corresponds to the data item. That is, given an ID of the item, go find the UI element displaying it.

    Edit: I am apparently not explaining this well enough. I am binding some data objects to a TreeView control and then wanting to select a specific item programmatically, given that business object's ID. I don't see why it's so hard to traverse the visual tree and find the element I want, as the data object's ID is in the Tag property of the appropriate visual element. I'm using Mole, and I am able to find the UI element with the appropriate ID in its Tag. I just cannot find the visual element in code. LogicalTreeHelper does not traverse any items in the tree. Neither does ItemContainerGenerator.ContainerFromItem retrieve anything for items in the tree view.
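
    A hedged sketch of a recursive search with VisualTreeHelper, which descends into the generated TreeViewItem containers where the logical tree stops. One caveat: containers only exist after the items have been generated (expanded and laid out), which is also why ItemContainerGenerator.ContainerFromItem can return null for collapsed or virtualized items:

        // using System.Windows; using System.Windows.Media;
        // Assumes Tag holds the business object's ID; returns null if no
        // container has been generated for it yet.
        private static FrameworkElement FindByTag(DependencyObject root, object tag)
        {
            int count = VisualTreeHelper.GetChildrenCount(root);
            for (int i = 0; i < count; i++)
            {
                DependencyObject child = VisualTreeHelper.GetChild(root, i);
                FrameworkElement fe = child as FrameworkElement;
                if (fe != null && Equals(fe.Tag, tag))
                    return fe;
                FrameworkElement match = FindByTag(child, tag);
                if (match != null)
                    return match;
            }
            return null;
        }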

    Read the article

  • Associative Array / Object can't be read in functions

    - by Matrym
    At the very beginning of the JavaScript file, I have:

        var lbp = {};
        lbp.defaults = {
            minLength: 40
        };

    I can successfully alert it afterwards, with:

        alert(lbp.defaults.minLength);

    But as soon as I put it inside a function, when I alert, I get "undefined". What gives, and how do I avoid this? Is it absolutely necessary to pass this variable into each function, for example by doing:

        function(lbp) {
            alert(lbp.defaults.minLength);
        }

    I would have thought that, having defined it first, it would attain global scope and not be required to be passed in. Thanks in advance for enlightening me :)

    EDIT: The problem seems like it might be that my initialize function is itself defined within lbp. Is there any way to use this function var and still use lbp vars inside it?

        lbp.initialize = function() {
            alert(lbp.defaults.minLength);
        };

    The full bit of code looks like this:

        <script type="text/javascript">
            var lbp = {
                defaults: {
                    minLength: 40
                }
            };

            lbp.initialize = function() {
                alert(lbp.defaults.minLength);
            };

            window.onload = lbp.initialize;
        </script>
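
    For what it's worth, the full snippet above is self-consistent and should alert 40 on load. One common cause of the "undefined" symptom (an assumption, since the failing function isn't shown) is a parameter or local variable that shadows the global:

        var lbp = { defaults: { minLength: 40 } };

        function show(lbp) {                 // this parameter shadows the global lbp
            alert(lbp && lbp.defaults.minLength);
        }

        show();      // alerts "undefined": the parameter, not the global, is in scope
        show(lbp);   // alerts 40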

    Read the article

  • How should I configure grub for booting linux kernel from a USB hard drive?

    - by skolima
    I have a laptop hard drive in an external enclosure which I use as a large pendrive. For an added twist, I have installed Linux on it, so I can boot any machine with my distribution of choice (e.g. for data recovery, or repairing a b0rked system, or just using a borrowed laptop without destroying the preinstalled Windows).

    The problem is that, depending on the hardware configuration, the USB hard drive may be visible under different paths. For the grub configuration I just use (hd0,0), as it is relative to the device grub was launched from. I have UUID entries in /etc/fstab. I also specify rootwait in the kernel parameters so that it waits for the USB subsystem to settle down before trying to mount the device. What should I pass to the kernel as root= ?

    Currently I boot from the pendrive once, check the debug messages to see which /dev/sdX device the kernel has assigned to the USB drive, then reboot and edit the grub configuration. I can't change anything on the PC besides enabling "Boot from USB hard drive" in the BIOS and setting it to higher priority than the internal hard drives.

    There are various initrd-generating scripts which include support for UUIDs in the root device path; unfortunately the Gentoo native one (genkernel) does not support rootwait, and I had no luck trying to use others.

    The boot process goes like this (it is quite similar in Windows):

    1. The BIOS chooses the boot device and loads whatever is in its MBR (which happens to be grub stage-1).
    2. Grub loads its configuration and stage-2 files from the device it has set as root, using (hd0) for the device it was loaded from by the BIOS.
    3. Grub loads and starts a kernel (still the same numbering, so I can use (hd0,0) again).
    4. The kernel initializes all built-in devices (rootwait does its magic now).
    5. The kernel mounts the partition it was passed as root (this is a kernel parameter, not a grub parameter).
    6. init.d starts the userland booting process, including mounting things from /etc/fstab.

    Step 5 is the one giving me problems.
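
    One hedged option that avoids an initrd entirely: the kernel can resolve root=PARTUUID=... by itself (unlike root=UUID=..., which generally needs an initramfs to look up filesystem UUIDs), though this depends on the kernel version and configuration. A sketch of a grub legacy entry, with a hypothetical PARTUUID value; the real one comes from blkid on the booted system:

        # blkid /dev/sdX1 shows the PARTUUID
        title   Portable Linux
        root    (hd0,0)
        kernel  /boot/vmlinuz root=PARTUUID=0002fd47-01 rootwait ro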

    Read the article

  • Entity Framework and associations between string keys

    - by fredrik
    Hi, I am new to Entity Framework, and to ORMs for that matter. In the project that I'm involved in we have a legacy database with all its keys as strings, case-insensitive. We are converting to MSSQL and want to use EF as the ORM, but have run into a problem. Here is an example that illustrates it: TableA has a primary string key, and TableB has a reference to this primary key. In LINQ we write something like:

        var result = from t in context.TableB
                     select t.TableA;

        foreach (var r in result)
            Console.WriteLine(r.someFieldInTableA);

    Say TableA contains a primary key that reads "A", and TableB contains two rows that reference TableA but with different cases in the referencing field, "a" and "A". In our project we want both of the rows to end up in the result, but only the one with the matching case does. Using the SQL Profiler, I have noticed that both of the rows are selected. Is there a way to tell Entity Framework that the keys are case-insensitive?

    Edit: We have now tested this with NHibernate and come to the conclusion that NHibernate works with case-insensitive keys, so NHibernate might be a better choice for us. I am, however, still interested in finding out if there is any way to change the behaviour of Entity Framework.
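
    The symptom (both rows fetched, one dropped) suggests the database join honours its case-insensitive collation while EF's in-memory relationship fixup compares key strings ordinally. A hedged workaround sketch: make the join explicit so the comparison stays in SQL; entity and property names here are assumptions:

        var result = from b in context.TableB
                     join a in context.TableA
                         on b.TableAKey.ToUpper() equals a.Key.ToUpper()
                     select a;

    ToUpper() translates to UPPER() in the generated SQL, at the cost of bypassing the navigation property (and possibly indexes).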

    Read the article

  • How to extract ALL typedefs and structs and unions from c++ source

    - by Michael Wells
    I have inherited a Visual Studio project that contains hundreds of files. I would like to extract all the typedefs, structs and unions from each .h/.cpp file and put the results in a file. Each typedef/struct/union should be on one line in the results file, which would make sorting much easier. For example:

        typedef int myType;
        struct myFirstStruct { char a; int b;...};
        union Part_Number_Serial_Number_Part_2_Response_Message_Type {struct{Message_Response_Head_Type Head; Part_Num_Serial_Num_Part_2_Report_Array Part_2_Report; Message_Tail_Type Tail;} Data; BYTE byData[140];}myUnion;
        struct { bool c; int d;...}mySecondStruct;

    My problem is that I do not know what to look for (the grammar of typedefs/structs/unions) using a regular expression. I cannot believe that nobody has done this before (I googled and have not found anything on this). Does anyone know the regular expressions for these? (Note some are commented out using //, others /* */.) Or a tool to accomplish this?

    Edit: I am toying with the idea of autogenerating source code and/or dialogs for modifying messages that use the underlying typedef/struct/union. I was going to use the output to generate an XML file that could be used for this reason. The source for these is in C/C++ and used in almost all my projects, but the projects themselves are usually NOT in C/C++. By using the XML version I would only need to update/add the typedef/struct/union in one place, and all the projects would be able to autogenerate the source and/or dialogs.
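
    As a rough starting point only - regular expressions cannot handle arbitrarily nested braces, comments, or preprocessor tricks, so a tool built on a real C/C++ front end is the reliable route - a hedged sketch of patterns that catch the simple cases:

        typedef[^;{]*;                                             # one-line typedefs
        (typedef\s+)?(struct|union)\s*\w*\s*\{[^{}]*\}\s*[^;]*;    # single-level structs/unions

    The [^{}]* part is exactly what fails on the nested union in the example above; each additional nesting level needs another layer in the pattern (or a proper parser).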

    Read the article

  • using libcurl to check if a file exists on a SFTP site

    - by Snazzer
    I'm using C++ with libcurl to do SFTP/FTPS transfers. Before uploading a file, I need to check if the file exists without actually downloading it. If the file doesn't exist, I run into the following problem:

        // set up CurlHandle for the public/private keys and whatever else first
        curl_easy_setopt(CurlHandle, CURLOPT_URL, "sftp://user:pass@host/nonexistent-file");
        curl_easy_setopt(CurlHandle, CURLOPT_NOBODY, 1);
        curl_easy_setopt(CurlHandle, CURLOPT_FILETIME, 1);
        int result = curl_easy_perform(CurlHandle);
        // result is CURLE_OK, not CURLE_REMOTE_FILE_NOT_FOUND
        // using curl_easy_getinfo to get the file time will return -1 for filetime,
        // regardless of whether the file is there or not

    If I don't use CURLOPT_NOBODY, it works: I get CURLE_REMOTE_FILE_NOT_FOUND. However, if the file does exist, it gets downloaded, which wastes time, since I just want to know whether it's there or not. Any other techniques/options I'm missing? Note that it should work for FTPS as well.

    Edit: This error occurs with SFTP. With FTPS/FTP I get CURLE_FTP_COULDNT_RETR_FILE, which I can work with.
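
    A hedged alternative sketch: ask for a one-byte range instead of a NOBODY request, so an existing file costs a single byte rather than a full download, while a missing file should still fail. CURLOPT_RANGE is a standard option, but how each protocol honours ranges varies, so this needs testing against both SFTP and FTPS servers:

        curl_easy_setopt(CurlHandle, CURLOPT_URL, "sftp://user:pass@host/maybe-file");
        curl_easy_setopt(CurlHandle, CURLOPT_RANGE, "0-0");   // first byte only
        CURLcode result = curl_easy_perform(CurlHandle);
        bool exists = (result == CURLE_OK);
        curl_easy_setopt(CurlHandle, CURLOPT_RANGE, NULL);    // reset for later transfers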

    Read the article

  • TEXTAREAs scroll by themselves (on IE8) every time you type one character

    - by Justin Grant
    IE8 has a known bug (per connect.microsoft.com) where typing or pasting text into a TEXTAREA element will cause the textarea to scroll by itself. This is hugely annoying and shows up in many community sites, including Wikipedia. The repro is this:

    1. Open the HTML below with IE8 (or use any long page on Wikipedia, which will exhibit the same problem until they fix it).
    2. Size the browser full-screen.
    3. Paste a few pages of text into the TEXTAREA.
    4. Move the scrollbar to the middle position.
    5. Now type one character into the textarea.

    Expected: nothing happens. Actual: scrolling happens on its own, and the insertion point ends up near the bottom of the textarea!

    Below is repro HTML (you can also see this live on the web here: http://en.wikipedia.org/w/index.php?title=Text_box&action=edit):

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" >
        <body>
            <div style="width: 80%">
                <textarea rows="20" cols="80" style="width:100%;" ></textarea>
            </div>
        </body>
        </html>
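
    A naive, hedged mitigation sketch (an assumption, not a verified fix for the IE8 bug): record the textarea's scroll offset before each keystroke and restore it afterwards, so any self-scrolling is undone. This would also suppress legitimate auto-scrolling when the caret moves out of view, so treat it as a starting point only:

        var ta = document.getElementById("myTextarea"); // hypothetical id
        var lastScroll = 0;
        ta.onkeydown = function () { lastScroll = ta.scrollTop; };
        ta.onkeyup = function () { ta.scrollTop = lastScroll; };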

    Read the article

  • What can I use to journal writes to the file system

    - by Dmitry
    Hello, all. I need to track all writes to files in order to keep a synchronized version of the files somewhere else (a server, or just another directory; it doesn't matter which). The constraints:

    - All files are located in the same directory.
    - Feel free to create some system files (e.g. SomeFileName.Ext~temp-data).
    - No one has concurrent access to the synced directory; nobody spoils our meta-files or changes the real files before we apply the postponed writes (which act like commits).
    - There is no need to recover "local" changes in case of a crash; the system can just be rolled back to the "server" state by a simple copy from it.
    - It is significant that it be transparent to use (the programmer must just call ordinary fopen(), read(), write()).

    It must be guaranteed that the copy of the files which the "server" has is consistent; that is, the whole set of files as it existed at some moment in time. They may be substantially outdated, but they must be a fair snapshot of all files at one point.

    As I understand it, I should overload the writing logic to collect data in order to send changes to the "server" - for example, writing to a temporary File~tmp - and I also have to overload reads, so the program can read the actual data of the file. It would be great if you could suggest an existing library (Java or C++, it is unimportant) or solution (VCS customizing?), or give hints on how I should write it myself.

    Edit: After some reading I have more precise requirements: I need a COW (copy-on-write) wrapper for fopen(), fwrite(), ..., or an interceptor (hook) for WriteFile() and the other FS API system calls. A log-structured file system in userspace would be an alternative too.
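
    A hedged sketch of the interception half on Linux, via LD_PRELOAD (one of the routes the edit mentions): redirect fopen()-for-writing to a shadow file next to the original. Commit/rollback, fwrite() handling, and read-through-to-shadow logic are all omitted; this only shows the hooking mechanism. Compile with: gcc -shared -fPIC -o libcow.so cow.c -ldl

        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stdio.h>
        #include <string.h>

        typedef FILE *(*fopen_fn)(const char *, const char *);

        FILE *fopen(const char *path, const char *mode) {
            static fopen_fn real_fopen = NULL;
            if (!real_fopen)
                real_fopen = (fopen_fn)dlsym(RTLD_NEXT, "fopen");
            if (strchr(mode, 'w') || strchr(mode, 'a')) {
                /* writes go to a shadow file, to be committed later */
                char shadow[4096];
                snprintf(shadow, sizeof shadow, "%s~temp-data", path);
                return real_fopen(shadow, mode);
            }
            /* reads should first check for a shadow copy -- omitted here */
            return real_fopen(path, mode);
        }

    On Windows the analogous hook point would be CreateFile/WriteFile interception (e.g. via an API-hooking library), which this sketch does not cover.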

    Read the article

  • Modifying C# dictionary value

    - by minjang
    I'm a C++ expert, but not at all in C#. I created a Dictionary<string, STATS>, where STATS is a simple struct. Once I built the dictionary with the initial string/STATS pairs, I want to modify the dictionary's STATS values. In C++ it's very clear:

        Dictionary<string, STATS*> benchmarks;
        // initialize it...
        STATS* stats = benchmarks[item.Key]; // touch stats directly

    However, I tried this in C#:

        Dictionary<string, STATS> benchmarks = new Dictionary<string, STATS>();

        // Initialize benchmarks with a bunch of STATS
        foreach (var item in _data)
            benchmarks.Add(item.app_name, item);

        foreach (KeyValuePair<string, STATS> item in benchmarks)
        {
            // I want to modify the STATS value inside the benchmarks dictionary.
            STATS stat_item = benchmarks[item.Key];
            ParseOutputFile("foo", ref stat_item);
            // But benchmarks is not modified... stat_item is just a copy.
        }

    This is a really novice problem, but an answer wasn't easy to find. EDIT: I also tried the following:

        STATS stat_item = benchmarks[item.Key];
        ParseOutputFile(file_name, ref stat_item);
        benchmarks[item.Key] = stat_item;

    However, I got an exception, since such an assignment invalidates the enumerator:

        Unhandled Exception: System.InvalidOperationException: Collection was modified; enumeration operation may not execute.
           at System.ThrowHelper.ThrowInvalidOperationException(ExceptionResource resource)
           at System.Collections.Generic.Dictionary`2.Enumerator.MoveNext()
           at helper.Program.Main(String[] args) in D:\dev\helper\Program.cs:line 75
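
    A hedged fix sketch: because STATS is a struct, the indexer returns a copy, so the mutated value has to be written back - and the write-back must not happen while enumerating the dictionary itself. Iterating over a snapshot of the keys sidesteps both problems (requires using System.Linq for ToList()):

        foreach (string key in benchmarks.Keys.ToList()) // snapshot: safe to modify below
        {
            STATS statItem = benchmarks[key];
            ParseOutputFile("foo", ref statItem);
            benchmarks[key] = statItem;              // write the modified copy back
        }

    Alternatively, declaring STATS as a class gives it the reference semantics the C++ version relied on, and the original loop then works unchanged.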

    Read the article

  • ASP:DropDownList in ItemTemplate: Why is SelectedValue attribute allowed?

    - by recursive
    This piece of code:

        <asp:DropDownList runat="server" ID="testdropdown" SelectedValue="2">
            <asp:ListItem Text="1" Value="1"></asp:ListItem>
            <asp:ListItem Text="2" Value="2"></asp:ListItem>
            <asp:ListItem Text="3" Value="3"></asp:ListItem>
        </asp:DropDownList>

    yields this error:

        The 'SelectedValue' property cannot be set declaratively.

    Yet, this is a legal and commonly used edit template for databound GridViews. The SelectedValue attribute certainly appears to be declaratively set here:

        <EditItemTemplate>
            <asp:DropDownList runat="server" ID="GenreDropDownList"
                DataSourceID="GenreDataSource"
                DataValueField="GenreId" DataTextField="Name"
                SelectedValue='<%# Bind("Genre.GenreId") %>'>
            </asp:DropDownList>
        </EditItemTemplate>

    The question is: what is the difference between the cases when you are allowed to set it declaratively and those in which you are not? The error message implies that it's never allowed.
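
    The usual explanation, hedged: a <%# Bind(...) %> expression is not really a declarative setting; it is compiled into code that calls the SelectedValue property setter at data-binding time, which is an ordinary runtime assignment. A literal attribute value, by contrast, is applied at parse time, which ListControl explicitly rejects. For the static case, marking the item itself is the supported equivalent:

        <asp:DropDownList runat="server" ID="testdropdown">
            <asp:ListItem Text="1" Value="1"></asp:ListItem>
            <asp:ListItem Text="2" Value="2" Selected="True"></asp:ListItem>
            <asp:ListItem Text="3" Value="3"></asp:ListItem>
        </asp:DropDownList>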

    Read the article

  • What would be the best way to install (distribute) dynamic libraries in Mac OSX using CMake/Cpack ?

    - by YuppieNetworking
    Hello all, I have a project whose artifacts are two dynamic libraries, let's say libX.dylib and libY.dylib (or .so for Linux distributions). There are no executables. Now I would like to distribute these libraries. Since I already use CMake to compile them, I looked at CPack and successfully generated .tgz and .deb packages for Linux. However, for Mac OS X I have no idea, and the CPack wiki about its generators did not help me much. I managed to generate a PackageMaker package, but as clearly stated in this PackageMaker howto, there is no uninstall option when using this util. I then read a bit about bundles, but I feel lost, especially since I have no executable.

    Question: What is the correct way to generate a package for Mac OS X using CPack? My ideal scenario would be something that installs either as easily as a bundle, or as a deb file does on Debian/Ubuntu. Thanks for your help.

    Edit: One more detail: the code of one of these libraries is not open, so I can't expect the users to do a cmake; make; make install. That's why I want a .deb, .tar.gz, bundle or whatsoever.
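
    A hedged sketch of one route: CPack's DragNDrop generator produces a .dmg image whose contents users drag into place (and delete to uninstall), which sidesteps PackageMaker's missing uninstall. Generator availability depends on the CMake version; the target and package names below are assumptions:

        # In the top-level CMakeLists.txt, after the library targets X and Y exist
        install(TARGETS X Y LIBRARY DESTINATION lib)

        set(CPACK_GENERATOR "DragNDrop;TGZ")   # .dmg for Mac, .tar.gz as a fallback
        set(CPACK_PACKAGE_NAME "mylibs")       # hypothetical
        set(CPACK_PACKAGE_VERSION "1.0.0")
        include(CPack)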

    Read the article

  • Problems updating a textBox ASP.NET

    - by Roger Filipe
    Hello, I'm starting out in ASP.NET and am having a problem that I do not understand. I am building a site for news; every news item has a title and a body. I have a page where I can insert news items; it uses a textbox for each of the fields (title and body), and after clicking the submit button everything goes OK and the values are saved to the database. I also have a page where I can read the news, which uses labels for each of the fields, set in Page_Load.

    Now I'm having problems on the page where I edit the news. I load the two textboxes (title and body) in Page_Load; so far so good. But when I change the text and click the submit button, it ignores the changes I made and saves the text loaded in Page_Load. This code doesn't show any database connection, but you can see what I'm talking about:

        protected void Page_Load(object sender, EventArgs e)
        {
            textboxTitle.Text = "This is the title of the news";
            textboxBody.Text = "This is the body of the news";
        }

    I load the page, make changes to the text, and then click submit:

        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            String title = textboxTitle.Text;
            String body = textboxBody.Text;
            Response.Write("Title: " + title + " || ");
            Response.Write("Body: " + body);
        }

    Nothing happens; the text in the textboxes is always the one I loaded in Page_Load. How do I update the text in the textboxes?
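
    This is the classic postback ordering issue: Page_Load runs on every request, including the postback triggered by the submit button, so the seeded text overwrites the user's input before btnSubmit_Click reads it. A hedged fix sketch, seeding only on the first GET:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)   // skip on postbacks so user edits survive
            {
                textboxTitle.Text = "This is the title of the news";
                textboxBody.Text = "This is the body of the news";
            }
        }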

    Read the article

  • LINQ-SQL Updating Multiple Rows in a single transaction

    - by RPM1984
    Hi guys, I need help refactoring this legacy LINQ to SQL code, which is generating around 100 UPDATE statements. I'll keep playing around with the best solution, but would appreciate some ideas/past experience with this issue. Here's my code:

        List<FooBar> foos;
        int userId = 123;

        using (DataClassesDataContext db = new FooDatabase())
        {
            foos = (from f in db.FooBars
                    where f.UserId == userId
                    select f).ToList();

            foreach (FooBar fooBar in foos)
            {
                fooBar.IsFoo = false;
            }

            db.SubmitChanges();
        }

    Essentially I want to update the IsFoo field to false for all records that have a particular UserId value. What's happening is that the .ToList() fires off a query to get all the FooBars for a particular user, and then for each FooBar object an UPDATE statement is executed to update the IsFoo property. Can the above code be refactored into one single UPDATE statement? Ideally, the only SQL I want fired is:

        UPDATE FooBars SET IsFoo = 0 WHERE UserId = 123

    EDIT: OK, so it looks like it can't be done without using db.ExecuteCommand. Grr! What I'll probably end up doing is creating another extension method for the DLINQ namespace. It still requires some hardcoding (i.e. writing "WHERE" and "UPDATE"), but at least it hides most of the implementation details away from the actual LINQ query syntax.
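
    For reference, a hedged single-statement version via DataContext.ExecuteCommand, which takes positional parameters and so stays injection-safe:

        using (DataClassesDataContext db = new FooDatabase())
        {
            db.ExecuteCommand("UPDATE FooBars SET IsFoo = 0 WHERE UserId = {0}", userId);
        }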

    Read the article

  • Synchronizing one or more databases with a master database - Foreign keys

    - by Ikke
    I'm using Google Gears to be able to use an application offline (I know Gears is deprecated). The problem I am facing is synchronization with the database on the server. The specific problem is the primary keys, or more exactly, the foreign keys. When sending the information to the server, I could easily ignore the primary keys and generate new ones. But then how would I know what the relations are? I had one solution in mind, but then I would need to save all the primary keys for every client. What is the best way to synchronize multiple clients with one server DB?

    Edit: I've been thinking about it, and I guess sequential primary keys are not the best solution, but what other possibilities are there? Time-based doesn't seem right because of the collisions which could happen. A GUID comes to mind; is that an option? It looks like generating a GUID in JavaScript is not that easy. I could do something with natural keys or composite keys. As I think about it, that looks like the best solution. Can I expect any problems with that?
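
    On the "GUIDs in JavaScript" point, a hedged sketch of the widely used random (version 4 style) approach; Math.random() is not cryptographically strong, but for client-generated keys collisions are astronomically unlikely:

        function generateGuid() {
            return "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx".replace(/[xy]/g, function (c) {
                var r = Math.random() * 16 | 0;
                var v = (c === "x") ? r : (r & 0x3 | 0x8);
                return v.toString(16);
            });
        }

        // e.g. "3b241101-e2bb-4255-8caf-4136c566a962" (hypothetical output)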

    Read the article

  • MySQL "OR MATCH" hangs (very slow) on multiple tables

    - by Kerry
    After learning how to do MySQL full-text search, the recommended solution for searching across multiple tables was OR MATCH plus the extra join, which you can see in my query below. When I run this, the server just gets stuck in a "busy" state, and I can't access the MySQL database.

        SELECT a.`product_id`, a.`name`, a.`slug`, a.`description`, b.`list_price`, b.`price`,
               c.`image`, c.`swatch`, e.`name` AS industry,
               MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE ) AS relevance
        FROM `products` AS a
        LEFT JOIN `website_products` AS b ON (a.`product_id` = b.`product_id`)
        LEFT JOIN ( SELECT `product_id`, `image`, `swatch`
                    FROM `product_images`
                    WHERE `sequence` = 0 ) AS c ON (a.`product_id` = c.`product_id`)
        LEFT JOIN `brands` AS d ON (a.`brand_id` = d.`brand_id`)
        INNER JOIN `industries` AS e ON (a.`industry_id` = e.`industry_id`)
        WHERE b.`website_id` = %d
          AND b.`status` = %d
          AND b.`active` = %d
          AND MATCH( a.`name`, a.`sku`, a.`description` ) AGAINST ( '%s' IN BOOLEAN MODE )
           OR MATCH ( d.`name` ) AGAINST ( '%s' IN BOOLEAN MODE )
        GROUP BY a.`product_id`
        ORDER BY relevance DESC
        LIMIT 0, 9

    Any help would be greatly appreciated.

    EDIT: All the tables involved are MyISAM, utf8_general_ci. Here's the EXPLAIN SELECT output:

        id  select_type  table           type    possible_keys  key         key_len  ref                     rows   Extra
        1   PRIMARY      a               ALL     NULL           NULL        NULL     NULL                    16076  Using temporary; Using filesort
        1   PRIMARY      b               ref     product_id     product_id  4        database.a.product_id   2
        1   PRIMARY      e               eq_ref  PRIMARY        PRIMARY     4        database.a.industry_id  1
        1   PRIMARY      <derived2>      ALL     NULL           NULL        NULL     NULL                    23261
        1   PRIMARY      d               eq_ref  PRIMARY        PRIMARY     4        database.a.brand_id     1      Using where
        2   DERIVED      product_images  ALL     NULL           NULL        NULL     NULL                    25933  Using where

    UPDATE: The query returns (I think correctly) after 196 seconds. The query without multiple tables takes about 0.56 seconds (which I know is really slow; we plan on changing to Solr or Sphinx soon), but 196 seconds?? It would also work for us if we could add a number to the relevance when the term appears in the brand name (d.`name`).
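
    A hedged restructuring sketch: MySQL generally cannot use a FULLTEXT index when MATCH clauses are OR'd across different tables, which forces full scans (the type ALL rows in the EXPLAIN). Splitting into two selects joined by UNION lets each half use its own index; the remaining joins and WHERE filters from the original query would need to be repeated in both halves:

        SELECT a.`product_id`,
               MATCH(a.`name`, a.`sku`, a.`description`) AGAINST ('%s' IN BOOLEAN MODE) AS relevance
        FROM `products` AS a
        WHERE MATCH(a.`name`, a.`sku`, a.`description`) AGAINST ('%s' IN BOOLEAN MODE)

        UNION

        SELECT a.`product_id`,
               MATCH(d.`name`) AGAINST ('%s' IN BOOLEAN MODE) AS relevance
        FROM `products` AS a
        JOIN `brands` AS d ON a.`brand_id` = d.`brand_id`
        WHERE MATCH(d.`name`) AGAINST ('%s' IN BOOLEAN MODE)

    Also worth noting: the original WHERE mixes AND and OR without parentheses, so the b.* filters bind only to the first MATCH branch; wrapping the two MATCH clauses in parentheses may by itself change both the results and the plan.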

    Read the article

  • Efficient algorithm for Next button on a MySQL result set

    - by David Grayson
    I have a website that lets people view rows in a table (each row is a picture). There are more than 100,000 rows. You can view different subsets of the rows, and you can view them with different sort orders. While you are viewing one of the rows, you can click the "Next" or "Previous" buttons to go to the next/previous row in the list.

    How would you implement the "Next" and "Previous" features of the website? More specifically: if you have an arbitrary query that returns a list of up to 100,000+ rows, and you know some information about the current row someone is viewing, how do you determine the NEXT row efficiently?

    Here is the pseudo-code of the solution I came up with when the website was young. It worked well when there were only 1,000 rows, but now that there are 100,000 rows I think it is eating up too much memory:

        int nextRowId(string query, int currentRowId)
        {
            array allRowIds = mysql_query(query);                    // Takes up a lot of memory!
            int currentIndex = (index of currentRowId in allRowIds); // Takes time!
            return allRowIds[currentIndex + 1];
        }

    While you are thinking about this problem, remember that the website can store more information about the current row than just its ID (for example, the position of the current row in the result set), and this information can be used as a hint to help determine the ID of the next row.

    Edit: Sorry for not mentioning this earlier, but this isn't just a static website: rows can often be added to the list, and rows can be re-ordered in the list. (Much more rarely, rows can be removed from the list.) I think that I should worry about that kind of thing, but maybe you can convince me otherwise.
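
    A hedged keyset ("seek") sketch: instead of materializing the whole result set, fetch only the single row that follows the current one under the active sort order, using the sort key plus the ID as a tiebreaker. Column names and the :placeholders are assumptions; MySQL accepts the row-constructor comparison, though older versions may not drive it from an index, in which case it can be expanded into the equivalent AND/OR form:

        SELECT id
        FROM pictures
        WHERE (sort_key, id) > (:current_sort_key, :current_id)
        ORDER BY sort_key, id
        LIMIT 1;

    Because the lookup is anchored to the current row's values rather than its position, inserts and re-orders elsewhere in the list don't break Next/Previous.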

    Read the article

  • LaTeX - Apply an operation to every character in a string

    - by hroest
    Hi, I am using LaTeX and I have a problem concerning string manipulation. I want to have an operation applied to every character of a string; specifically, I want to replace every character "x" with "\discretionary{}{}{}x". I want to do this because I have a long string (DNA) which I want to be able to break at any point without hyphenation. Thus I would like to have a command called "myDNA" that will do this for me, instead of manually inserting \discretionary{}{}{} after every character. Is this possible? I have looked around the web and there wasn't much helpful information on this topic (at least not any I could understand), and I hoped that you could help.

    Edit: To clarify, what I want to see in the finished document is something like this:

        the dna sequence is CTAAAGAAAACAGGACGATTAGATGAGCTTGAGAAAGCCATCACCACTCA
        AATACTAAATGTGTTACCATACCAAGCACTTGCTCTGAAATTTGGGGACTGAGTACACCAAATACGATAG
        ATCAGTGGGATACAACAGGCCTTTACAGCTTCTCTGAACAAACCAGGTCTCTTGATGGTCGTCTCCAGGT
        ATCCCATCGAAAAGGATTGCCACATGTTATATATTGCCGATTATGGCGCTGGCCTGATCTTCACAGTCAT
        CATGAACTCAAGGCAATTGAAAACTGCGAATATGCTTTTAATCTTAAAAAGGATGAAGTATGTGTAAACC
        CTTACCACTATCAGAGAGTTGAGACACCAGTTTTGCCTCCAGTATTAGTGCCCCGACACACCGAGATCCT
        AACAGAACTTCCGCCTCTGGATGACTATACTCACTCCATTCCAGAAAACACTAACTTCCCAGCAGGAATT

    Just plain line breaks, without any hyphens. The DNA sequence will be one long string without any spaces or anything, but it can break at any point. This is why my idea was to insert a "\discretionary{}{}{}" after every character, so that it can break at any point without inserting any hyphens.
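
    For what it's worth, the seqsplit package is built for exactly this (breaking long sequences such as DNA anywhere, with no hyphens). A hand-rolled hedged sketch using classic token-by-token tail recursion, adequate for plain A/C/G/T strings (it would need extra care for spaces or macros inside the argument):

        \def\dnastop{\dnastop}% unique sentinel, never actually expanded
        \def\dnaloop#1{%
          \ifx#1\dnastop
          \else
            \discretionary{}{}{}#1%
            \expandafter\dnaloop
          \fi}
        \newcommand{\myDNA}[1]{\dnaloop#1\dnastop}

        % usage: the dna sequence is \myDNA{CTAAAGAAAACAGG...}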

    Read the article

  • Vim: Making Auto-Completion Smarter

    - by Rafid K. Abdullah
    I use ctags, taglist, etc., to get auto-completion in Vim. However, it is very limited compared to Visual Studio IntelliSense or Eclipse auto-completion. I am wondering whether it is possible to tune Vim to:

    1. Show auto-completion whenever . or -> is typed, but only after text that might be a variable (e.g. avoid showing auto-completion after a number).
    2. Show function parameters when ( is typed.
    3. Stop dismissing the completion list when I delete all the characters typed after . or -> : when I enter a variable name and then press . or -> to search for a certain member, I frequently have to delete all the characters I typed after the . or ->, but this makes Vim hide the completion list. I would like to keep it visible unless I press Esc. (See the sketch after this list for point 1.)
    4. Show related completions only: when I type a variable and press ^X ^O, it usually shows me all the tags in the ctags file. I would like it to show only the tags related to the variable.

    Thanks for the help. EDIT: Some people are voting for this question, but nobody seems to know the answer. So I just wanted to mention that you don't have to provide a complete answer; partial answers to any of the mentioned points would be good too.
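
    For point 1, a hedged vimscript sketch: remap . in insert mode so that it triggers omni completion only when the preceding character looks like an identifier (the same idea would extend to -> via a two-key mapping). This assumes an omnifunc is already set up, e.g. by a ctags-based C++ completion plugin:

        " Insert-mode ., auto-firing omni completion after identifier-like text.
        " Crude heuristic: a trailing digit (e.g. 3.14) won't trigger, at the
        " cost of also skipping identifiers that end in a digit.
        function! s:SmartDot()
          let l:col = col('.') - 1
          let l:prev = l:col > 0 ? getline('.')[l:col - 1] : ''
          return l:prev =~# '[A-Za-z_)\]]' ? ".\<C-x>\<C-o>" : '.'
        endfunction
        inoremap <expr> . <SID>SmartDot()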

    Read the article

  • git changes modification time of files

    - by tanascius
    In the GitFaq I can read that Git sets the current time as the timestamp on every file it modifies, but only those. However, I tried this command sequence (EDIT: added the complete command sequence):

        $ git init test && cd test
        Initialized empty Git repository in d:/test/.git/

        exxxxxxx@wxxxxxxx /d/test (master)
        $ touch filea fileb

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git add .

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git commit -m "first commit"
        [master (root-commit) fcaf171] first commit
         0 files changed, 0 insertions(+), 0 deletions(-)
         create mode 100644 filea
         create mode 100644 fileb

        exxxxxxx@wxxxxxxx /d/test (master)
        $ ls -l > filea

        exxxxxxx@wxxxxxxx /d/test (master)
        $ touch fileb -t 200912301000

        exxxxxxx@wxxxxxxx /d/test (master)
        $ ls -l
        total 1
        -rw-r--r-- 1 exxxxxxx Administ 132 Feb 12 18:36 filea
        -rw-r--r-- 1 exxxxxxx Administ   0 Dec 30 10:00 fileb

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git status -a
        warning: LF will be replaced by CRLF in filea
        # On branch master
        warning: LF will be replaced by CRLF in filea
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       modified:   filea
        #

        exxxxxxx@wxxxxxxx /d/test (master)
        $ git checkout .

        exxxxxxx@wxxxxxxx /d/test (master)
        $ ls -l
        total 0
        -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 filea
        -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 fileb

    Now my question: why did git change the timestamp of fileb? I'd expect the timestamp to be unchanged. Are my commands causing a problem? Maybe it is possible to do something like git checkout . --modified instead? I am using git version 1.6.5.1.1367.gcd48 under mingw32/Windows XP.
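
    A hedged diagnostic sketch (the explanation is an assumption): with CRLF conversion active, a file whose stat information is stale - fileb's mtime was changed by touch - can no longer be verified as up to date from the index alone, so git checkout . rewrites it, stamping the current time. Refreshing the stat cache first, or checking out only the file that really changed, should avoid that:

        git config core.autocrlf          # see whether CRLF conversion is on
        git update-index --refresh        # re-stat files; unchanged ones are re-recorded
        git diff --name-only              # should now list only real modifications
        git checkout -- filea             # restore just the file you actually changed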

    Read the article
