Search Results

Search found 21433 results on 858 pages for 'query execution plans'.

Page 325 of 858

  • Perform Grouping of Resultset in Code

    - by NinjaBomb
    Stackoverflowers, I have a resultset from a SQL query in the form of:

        Category  Column2  Column3
        A         2        3.50
        A         3        2
        B         3        2
        B         1        5
        ...

    I need to group the resultset based on the Category column and sum the values for Column2 and Column3. I have to do it in code because I cannot perform the grouping in the SQL query that gets the data, due to the complexity of that query (long story). The grouped data will then be displayed in a table. I have it working for a specific set of values in the Category column, but I would like a solution that handles any possible values that appear in the Category column. I know there has to be a straightforward, efficient way to do it, but I cannot wrap my head around it right now. How would you accomplish it?
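    A minimal sketch of one way to do the grouping in C# with LINQ, assuming the resultset has been loaded into a DataTable and the column names and types match the example above (adjust the Field<T>() types to whatever the query actually returns; AsEnumerable() and Field<T>() come from System.Data.DataSetExtensions):

        // Group the rows by Category and sum the two numeric columns.
        var totals = resultTable.AsEnumerable()
            .GroupBy(row => row.Field<string>("Category"))
            .Select(g => new
            {
                Category = g.Key,
                Column2Total = g.Sum(row => row.Field<int>("Column2")),
                Column3Total = g.Sum(row => row.Field<decimal>("Column3"))
            })
            .ToList(); // one entry per distinct Category, ready to bind to the display table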

    Read the article

  • LINQ expression precedence with Skip(), Take() and OrderBy()

    - by Robert Koritnik
    I'm using LINQ to Entities to display paged results, but I'm having issues with the combination of Skip(), Take() and OrderBy() calls. Everything works fine, except that OrderBy() is applied too late: it is executed after the result set has been cut down by Skip() and Take(). So each page of results has its items in order, but the ordering is done on a single page's worth of data instead of ordering the whole set and then limiting those records with Skip() and Take(). How do I set the precedence of these statements? My example (simplified):

        var query = ctx.EntitySet.Where(/* filter */).OrderBy(/* expression */);
        int total = query.Count();
        var result = query.Skip(n).Take(x).ToList();

    Read the article

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007, while I was still an FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed Oracle Coherence. Beyond this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... still not very helpful. In general, when people don't fully understand a technology or a concept, they tend to find shortcuts, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. My shortcut was "Oracle Coherence Cache helps to improve Performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away from that group quickly, in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career at Oracle, delivered by Brian Oliver. The biggest mistake I made was to assume that the performance improvement with Coherence was about response time. That seemed legitimate at the time because, after all, caches help reduce the latency of access to cached data, and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then? The key mistake I made was my perception of, or obsession with, how performance improvement should be driven, and I strongly believe this is still a common problem for most developers. We all know that the performance of a system is generally expressed as Capacity (or Throughput), with two important dimensions, Speed (response time) and Volume (load):

    Capacity (TPS) = Volume (T) / Speed (S)

    To increase the Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on reducing the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to management because there is a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume... The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage it, the Capacity cannot increase forever, because the Speed is itself influenced by the Volume. Every system shows the same kind of performance curve: in a traditional system, increasing the Volume (transactions) will at some point also increase the Speed (response time). The reason is simple: most of the time the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will take roughly double the execution time of parsing 100 entries.
    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with faster CPUs and/or a faster network to reduce network latency. That is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is to design applications that can scale out to increase the Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time and linear scaling, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the Distributed Cache, because it can guarantee:
    - Constant data-object access time, independent of the number of objects and the Coherence cluster size
    - Data-object distribution by affinity, for in-memory data grouping
    - In-place data processing, for parallel execution
    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping the Speed consistent. In the future I will keep adding new blog entries around this topic, with some sample code and experience I have captured over the last few years. In the meantime, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using Oracle Coherence, and then start playing with the product through our tutorial. Have fun!

    Read the article

  • Problem with Sphinx resultset larger than 16 MB in MySQL

    - by gmemon
    Hello all, I am accessing a large indexed text dataset using sphinxse via MySQL. The size of the resultset is on the order of gigabytes. However, I have noticed that MySQL stops the query with the following error whenever the resultset is larger than 16 MB:

        1430 (HY000): There was a problem processing the query on the foreign data source.
        Data source error: bad searchd response length (length=16777523)

    The length shows the size of the resultset that offended MySQL. I have tried the same query with Sphinx's standalone search program and it works fine. I have tried all possible variables in both MySQL and Sphinx, but nothing is helping. I am using Sphinx 0.9.9 rc-2 and MySQL 5.1.46. Thanks

    Read the article

  • Linq like or other construction

    - by Yauhen Kavalenka
    I have an Oracle DB in my solution and I want this query to return some results. Query example:

        select * from doctor where doctor.name like '%IVANOV_A%';

    But if I do it in LINQ I cannot get any result:

        from p in repository.Doctor.Where(x => x.Name.ToLower().Contains(name)) select p;

    where 'name' is a string parameter. The web layout sends a string like "Ivanov a" or "A Ivanov", but I want to let the user choose their own pattern for the query. How can I get a "patient by name" if the name consists of a "First name" and a "Last name" but the user doesn't know the doctor's full name?
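    A minimal sketch of one possible approach, assuming 'name' holds whatever the user typed (e.g. "Ivanov a" or "A Ivanov") and that the LINQ provider translates ToLower()/Contains() into LIKE for the Oracle database: split the input into words and require every word to appear somewhere in the doctor's name, so the order the user types them in no longer matters:

        // Hypothetical sketch: each word the user typed must occur in Doctor.Name.
        var parts = name.ToLower().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

        IQueryable<Doctor> doctors = repository.Doctor; // assumes repository.Doctor is an IQueryable<Doctor>
        foreach (var part in parts)
        {
            var p = part; // copy to avoid capturing the loop variable in the lambda
            doctors = doctors.Where(d => d.Name.ToLower().Contains(p));
        }
        var result = doctors.ToList(); // matches "IVANOV A", "A IVANOV", etc.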

    Read the article

  • How to group complex list objects by using Linq

    - by Daoming Yang
    I want to select and group the products, and rank them by the number of times they occur. For example, I have an OrderList; each Order object has an OrderProductVariantList (OrderLineList), each OrderProductVariant object has a ProductVariant, and the ProductVariant object has a Product object which contains the product information. A friend helped me with the following code. It compiles, but it does not return any value/result. I used the watch window for the query and it gave me "The name 'query' does not exist in the current context". Can anyone help me? Many thanks.

        var query = orderList
            .SelectMany(o => o.OrderLineList) // results in IEnumerable<OrderProductVariant>
            .Select(opv => opv.ProductVariant)
            .Select(pv => pv.Product)
            .GroupBy(p => p)
            .Select(g => new { Product = g.Key, Count = g.Count() });
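    For the ranking part, a minimal sketch building on the corrected query above (the fix is pv.Product rather than p.Product), with the groups ordered so the most frequently occurring product comes first:

        var ranked = orderList
            .SelectMany(o => o.OrderLineList)           // flatten to OrderProductVariant
            .Select(opv => opv.ProductVariant.Product)  // navigate down to Product
            .GroupBy(p => p)                            // group identical products
            .Select(g => new { Product = g.Key, Count = g.Count() })
            .OrderByDescending(x => x.Count)            // rank by number of occurrences
            .ToList();

    Note that GroupBy(p => p) relies on the Product instances comparing equal (reference equality or an Equals/GetHashCode override); grouping by a key such as a product id, if one exists, may be more reliable.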

    Read the article

  • A Knight's Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No.3.15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had, for many years, used software which broke up incoming “parent” orders into smaller “child” orders that were then transmitted to various exchanges or trading venues for execution. A tracking ‘cumulative quantity’ function counted the number of ‘child’ orders and stopped the process once the total of child orders matched the ‘parent’ and so the parent order had been completed. Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called ‘power peg’ and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point, had that flag ever been set, the old ‘Power Peg’ would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn’t, presumably because the software that set the flag was removed. In 2012, nearly a decade after ‘Power Peg’ was abandoned, Knight prepared a new module to their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time, the flag had remained unused and someone made the fateful decision to reuse it, and replace the old ‘power peg’ code with this new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn’t. To quote…

    “Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.” (para 15)

    “On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. Although one part of Knight’s order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS.” (para 16)

    SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight’s shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading. Obviously, I’m interested in all this because, at one time, I used to write trading systems for the City of London. Obviously, the US SEC is in a far better position than any of us to work out the failings of Knight’s IT department, and the report makes for painful reading. I can’t help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was ‘all-or-nothing’, and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek Tragedy. All the way along one wants to shout ‘No! not that way!’ and ‘Aargh! Don’t do it!’. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight’s mistakes weren’t made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they’re obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • Anchoring the Action URL of a Form

    - by John
    Hello, I am using a form that sends users to a file called "comments2.php":

        <form action="http://www...com/.../comments/comments2.php" method="post">

    On comments2.php, the data passed over from the form is inserted into MySQL:

        $query = sprintf("INSERT INTO comment VALUES (NULL, %d, %d, '%s', NULL)", $uid, $subid, $comment);
        mysql_query($query) or die(mysql_error());

    Then, later in comments2.php, I use a query that loops through entries meeting certain criteria. The loop outputs a row with the following information:

        echo '<td rowspan="3" class="commentname1" id="comment-' . $row["commentid"] . '">'.stripslashes($row["comment"]).'</td>';

    For the form above, I would like the action URL to be anchored to the highest value of "commentid" from id="comment-' . $row["commentid"] . '". How can this be done? Thanks in advance, John

    Read the article

  • How can I have MySQL write outfiles as a different user?

    - by David Locke
    I'm working with a MySQL query that writes into an outfile. I run this query once every day or two and so I want to be able to remove the outfile without having to resort to su or sudo. The only way I can think of making that happen is to have the outfile written as owned by someone other than the mysql user. Is this possible? Edit: I am not redirecting output to a file, I am using the INTO OUTFILE part of a select query to output to a file. If it helps: mysql --version mysql Ver 14.12 Distrib 5.0.32, for pc-linux-gnu (x86_64) using readline 5.2

    Read the article

  • How to save bytes to an image and access it from Bottle

    - by Graham Smith
    I'm working on an API wrapper for Snapchat using Python and Bottle, but in order to return the file (retrieved by the Python script) I have to save the bytes (returned by Snapchat) to a .jpg file. I'm not quite sure how I will do this and still be able to access the file so that it can be returned. Here's what I have so far, but it returns a 404. @route('/image') def image(): username = request.query.username token = request.query.auth_token img_id = request.query.id return get_blob(username, token, img_id) def get_blob(usr, token, img_id): # Form URL and download encrypted "blob" blob_url = "https://feelinsonice.appspot.com/ph/blob?id={}".format(img_id) blob_url += "&username=" + usr + "&timestamp=" + str(timestamp()) + "&req_token=" + req_token(token) enc_blob = requests.get(blob_url).content # Save decrypted image FileUpload.save('/images/' + img_id + '.jpg') img = open('images/' + img_id + '.jpg', 'wb') img.write(decrypt(enc_blob)) img.close() return static_file(img_id + '.jpg', root='/images/')

    Read the article

  • ASP.NET and SQL server with huge data sets

    - by Jake Petroules
    I am developing a web application in ASP.NET and on one page I am using a ListView with paging. As a test I populated the table it draws from with 6 million rows. The table and a schema-bound view based off it have all the necessary indexes and executing the query in SQL Server Management Studio with SELECT TOP 5 returned in < 1 second as expected. But on the ASP.NET page, with the same query, it seems to be selecting all 6 million rows without any limit. Shouldn't the paging control limit the query to return only N rows rather than the entire data set? How can I use these ASP.NET controls to handle huge data sets with millions of records? Does SELECT [columns] FROM [tablename] quite literally mean that for the ListView, and it doesn't actually inject a TOP <n> and does all the pagination at the application level rather than the database level?
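    As far as I know, the ListView/DataPager pair pages in memory by default: the full result set is fetched and only the current page is rendered, which would explain the behaviour above. One way to push the paging down to SQL Server is an ObjectDataSource with EnablePaging="true", whose select method receives startRowIndex/maximumRows and can wrap the query in ROW_NUMBER(). A minimal sketch, with placeholder table and column names and an assumed connectionString field (requires System.Data and System.Data.SqlClient):

        // Select method for an ObjectDataSource with EnablePaging="true";
        // a matching SelectCountMethod should return the total row count.
        public static DataTable GetPage(int startRowIndex, int maximumRows)
        {
            const string sql = @"
                SELECT *
                FROM (SELECT ROW_NUMBER() OVER (ORDER BY Id) AS rn, *
                      FROM MyTable) AS numbered
                WHERE rn BETWEEN @start + 1 AND @start + @rows;";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@start", startRowIndex);
                cmd.Parameters.AddWithValue("@rows", maximumRows);
                var page = new DataTable();
                new SqlDataAdapter(cmd).Fill(page); // Fill opens and closes the connection itself
                return page;
            }
        }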

    Read the article

  • Linq to Entities, checking for specific foreign key id?

    - by AaronLS
    I am trying to get rows where the foreign key ParentID == 0, and this is what I am trying but I get a NotSupportedException because it can't translate the ArrayIndex [0]: IQueryable<ApplicationSectionNode> topLevelNodeQuery = from n in uacEntitiesContext.ApplicationSectionNodeSet where (int)n.Parent.EntityKey.EntityKeyValues[0].Value == 0 orderby n.Sequence select n; So I need to pull that ArrayIndex out of the query so that the runtime can successfully translate the query. I'm not sure how to do that though. How does one query a specific object via it's primary key or set of objects via foreign key? Edit: Note that there is not actually a row in the table with NodeId == 0, the 0 is a magic value(not my idea) to indicate top level nodes. So I can't do n.Parent.NodeId == 0

    Read the article

  • How to pass parameters to report model in Reporting Services

    - by savras
    I'm developing a report in RS that shows the top N customers based on some criteria. It also allows the user to select the number of customers and the period of time. Is it possible to do this using a report model? The thing that seems difficult is how to pass parameters determined by the user. Another thing that, in my opinion, is very disappointing is that I cannot use a SQL query as the dataset query, because it uses odd and elaborate XML, although report model items do seem to map their fields to query or table fields. I am considering report models because I need to provide a uniform data model (the same tables and fields) for more or less different database schemas. It would be very nice if somebody would explain what can and cannot be done with report models.

    Read the article

  • dojo connect mouseover and mouseout

    - by peirix
    When setting up dojo connections to onmouseover and onmouseout, and then adding content on mouseover, dojo fires the onmouseout event at once, since there is new content. Example: dojo.query(".star").parent().connect("onmouseover", function() { dojo.query("span", this).addContent("<img src='star-hover.jpg'>"); }).connect("onmouseout", function() { dojo.destroy(dojo.query("img", this)[0]); }); The parent() is a <td>, and the .star is a span. I want to add the hover image whenever the user hovers the table cell. It works as long as the cursor doesn't hover the image, because that will result in some serious blinking. Is this deliberate? And is there a way around it?

    Read the article

  • Cucumber testing with rails on mongoid-gridfs

    - by Deepak Lamichhane
    I am getting this weird error while running a cucumber test:

        ERROR Mongo::OperationFailure: Database command 'filemd5' failed: {"errmsg"=>"exception: best guess plan requested, but scan and order required: query: { files_id: ObjectId('4d1abab3a15c84139c00006e') } order: { files_id: 1, n: 1 } choices: { $natural: 1 } ", "code"=>13284, "ok"=>0.0}

    I have a list of similar scenarios where the first scenario passes but all the following scenarios fail. I searched for it and found that there is a problem with indexing, but I am not sure what query to write. Furthermore, I can add the query on the development Mongo database, but I want to make sure that the indexing is done in test too. If anyone has any ideas on this, feel free.

    Read the article

  • Transaction issue in java with hibernate - latest entries not pulled from database

    - by Gearóid
    Hi, I'm having what seems to be a transactional issue in my application. I'm using Java 1.6 and Hibernate 3.2.5. My application runs a monthly process where it creates billing entries for every user in the database based on their monthly activity. These billing entries are then used to create a Monthly Bill object. The process is:

    1. Get users who have activity in the past month
    2. Create the relevant billing entries for each user
    3. Get the set of billing entries that we've just created
    4. Create a Monthly Bill based on these entries

    Everything works fine until Step 3 above. The Billing Entries are correctly created (I can see them in the database if I add a breakpoint after the Billing Entry creation method), but they are not pulled out of the database. As a result, an incorrect Monthly Bill is generated. If I run the code again (without clearing out the database), new Billing Entries are created and Step 3 pulls out the entries created in the first run (but not the second run). This, to me, is very confusing. My code looks like the following:

        for (User user : usersWithActivities) {
            createBillingEntriesForUser(user.getId());
            userBillingEntries = getLastMonthsBillingEntriesForUser(user.getId());
            createXMLBillForUser(user.getId(), userBillingEntries);
        }

    The methods called look like the following:

        @Transactional
        public void createBillingEntriesForUser(Long id) {
            UserManager userManager = ManagerFactory.getUserManager();
            User user = userManager.getUser(id);
            List<AccountEvent> events = getLastMonthsAccountEventsForUser(id);
            BillingEntry entry = new BillingEntry();
            if (null != events) {
                for (AccountEvent event : events) {
                    if (event.getEventType().equals(EventType.ENABLE)) {
                        Calendar cal = Calendar.getInstance();
                        Date eventDate = event.getTimestamp();
                        cal.setTime(eventDate);
                        double startDate = cal.get(Calendar.DATE);
                        double numOfDaysInMonth = cal.getActualMaximum(Calendar.DAY_OF_MONTH);
                        double numberOfDaysInUse = numOfDaysInMonth - startDate;
                        double fractionToCharge = numberOfDaysInUse/numOfDaysInMonth;
                        BigDecimal amount = BigDecimal.valueOf(fractionToCharge * Prices.MONTHLY_COST);
                        amount.scale();
                        entry.setAmount(amount);
                        entry.setUser(user);
                        entry.setTimestamp(eventDate);
                        userManager.saveOrUpdate(entry);
                    }
                }
            }
        }

        @Transactional
        public Collection<BillingEntry> getLastMonthsBillingEntriesForUser(Long id) {
            if (log.isDebugEnabled())
                log.debug("Getting all the billing entries for last month for user with ID " + id);
            //String queryString = "select billingEntry from BillingEntry as billingEntry where billingEntry>=:firstOfLastMonth and billingEntry.timestamp<:firstOfCurrentMonth and billingEntry.user=:user";
            String queryString = "select be from BillingEntry as be join be.user as user where user.id=:id and be.timestamp>=:firstOfLastMonth and be.timestamp<:firstOfCurrentMonth";
            //This parameter will be the start of the last month ie. start of billing cycle
            SearchParameter firstOfLastMonth = new SearchParameter();
            firstOfLastMonth.setTemporalType(TemporalType.DATE);
            //this parameter holds the start of the CURRENT month - ie. end of billing cycle
            SearchParameter firstOfCurrentMonth = new SearchParameter();
            firstOfCurrentMonth.setTemporalType(TemporalType.DATE);
            Query query = super.entityManager.createQuery(queryString);
            query.setParameter("firstOfCurrentMonth", getFirstOfCurrentMonth());
            query.setParameter("firstOfLastMonth", getFirstOfLastMonth());
            query.setParameter("id", id);
            List<BillingEntry> entries = query.getResultList();
            return entries;
        }

        public MonthlyBill createXMLBillForUser(Long id, Collection<BillingEntry> billingEntries) {
            BillingHistoryManager manager = ManagerFactory.getBillingHistoryManager();
            UserManager userManager = ManagerFactory.getUserManager();
            MonthlyBill mb = new MonthlyBill();
            User user = userManager.getUser(id);
            mb.setUser(user);
            mb.setTimestamp(new Date());
            Set<BillingEntry> entries = new HashSet<BillingEntry>();
            entries.addAll(billingEntries);
            String xml = createXmlForMonthlyBill(user, entries);
            mb.setXmlBill(xml);
            mb.setBillingEntries(entries);
            MonthlyBill bill = (MonthlyBill) manager.saveOrUpdate(mb);
            return bill;
        }

    Help with this issue would be greatly appreciated as it's been racking my brain for weeks now! Thanks in advance, Gearoid.

    Read the article

  • Safe to KILL a mysql process REPLACEing records in a large myisam table?

    - by threecheeseopera
    I have a REPLACE query running for a few days now on a few MyISAM tables, the largest having 20+million records. I need it to stop. It is, basically: REPLACE INTO really_large_table (a,b,c,d) SELECT e,f,g,h FROM big_table INNER JOIN huge_table ON big_table.x LIKE CONCAT('%', huge_table.y, '%'); I need to KILL it, and I am worried that I may corrupt really_large_table. Because the sub-query itself takes a significant amount of time, the REPLACEing probably occurs (relatively) infrequently; if this is true, does this make it less likely for the data to become corrupted? For the curious, here is the SO question asked about the query I am trying to kill.

    Read the article

  • How Do I insert Data in a table From the Model MVC?

    - by user54197
    I have data coming into my Model; how do I set it up to insert the data into a table?

        public string Name { get; set; }
        public string Address { get; set; }
        public string City { get; set; }
        public string State { get; set; }
        public string Zip { get; set; }

        public Info()
        {
            using (SqlConnection connect = new SqlConnection(connections))
            {
                string query = "Insert Into Personnel_Data (Name, StreetAddress, City, State, Zip, HomePhone, WorkPhone)" +
                    "Values('" + Name + "','" + Address + "','" + City + "','" + State + "','" + Zip + "','" + ContactHPhone + "','" + ContactWPhone + "')";
                SqlCommand command = new SqlCommand(query, connect);
                connect.Open();
                command.ExecuteNonQuery();
            }
        }

    Name, Address, City, and so on are null when the query is run. How do I set this up?
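    The properties appear to be null simply because the INSERT runs inside the Info() constructor, before anything (such as MVC model binding) has assigned them. A minimal sketch of one way around this, reusing the names from the code above (Personnel_Data, connections and ContactHPhone/ContactWPhone come from the question; the Save() method itself is hypothetical): run the insert from a method called after the model has been populated, and use parameters instead of string concatenation:

        // Call this after the model binder has filled in the properties.
        public void Save()
        {
            const string query =
                "INSERT INTO Personnel_Data (Name, StreetAddress, City, State, Zip, HomePhone, WorkPhone) " +
                "VALUES (@Name, @Address, @City, @State, @Zip, @HomePhone, @WorkPhone)";

            using (var connect = new SqlConnection(connections))
            using (var command = new SqlCommand(query, connect))
            {
                command.Parameters.AddWithValue("@Name", Name);
                command.Parameters.AddWithValue("@Address", Address);
                command.Parameters.AddWithValue("@City", City);
                command.Parameters.AddWithValue("@State", State);
                command.Parameters.AddWithValue("@Zip", Zip);
                command.Parameters.AddWithValue("@HomePhone", ContactHPhone);
                command.Parameters.AddWithValue("@WorkPhone", ContactWPhone);
                connect.Open();
                command.ExecuteNonQuery(); // parameters also avoid the quoting and injection issues of concatenation
            }
        }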

    Read the article

  • Oracle - timed sampling from v$session_longops

    - by FrustratedWithFormsDesigner
    I am trying to track performance on some procedures that run too slowly (and seem to keep getting slower). I am using v$session_longops to track how much work has been done, and I have a query (sofar/((v$session_longops.LAST_UPDATE_TIME-v$session_longops.start_time)*24*60*60)) that tells me the rate at which work is being done. What I'd like to be able to do is capture the rate at which work is being done and how it changes over time. Right now, I just re-execute the query manually and then copy/paste to Excel. Not very optimal, especially when the phone rings or something else happens to interrupt my sampling frequency. Is there a way to have a script in SQL*Plus run a query every n seconds, spool the results to a file, and then continue doing this until the job ends? (Oracle 10g)

    Read the article

  • What is the performance impact of tracing in C# and ASP.NET?

    - by SkippyFire
    I found this in some production login code I was looking at recently...

        HttpContext.Current.Trace.Write(query + ": " + username + ", " + password);

    ...where query is a short SQL query to grab matching users. Does this have any sort of performance impact? I assume it's very small. Also, what is the purpose of this exact type of trace, using the HTTP context? Where does this data get traced to? Thanks in advance!
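    For what it's worth, a minimal sketch of how the cost can be kept negligible when tracing is off (note that the original line also writes the password into the trace, which I have dropped here): the string concatenation itself can be skipped by checking whether tracing is enabled first:

        if (HttpContext.Current.Trace.IsEnabled)
        {
            // Only build and write the message when tracing is actually on.
            HttpContext.Current.Trace.Write("Login", query + ": " + username);
        }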

    Read the article

  • MySQL: Column Contains Word From List of Words

    - by mellowsoon
    I have a list of words. Let's say they are 'Apple', 'Orange', and 'Pear'. I have rows in the database like this:

        -------------------------------------------------
        | author_id | content                           |
        -------------------------------------------------
        | 54        | I ate an apple for breakfast.     |
        | 63        | Going to the store.               |
        | 12        | Should I wear the orange shirt?   |
        -------------------------------------------------

    I'm looking for a query on an InnoDB table that will return the 1st and 3rd rows, because their content column contains one or more words from my list. I know I could query the table once for each word in my list and use LIKE with the % wildcard character, but I'm wondering if there is a single-query method for such a thing?

    Read the article

  • How to find loginname, database username, or roles of sqlserver domain user who doesn't have their own login?

    - by Adam Butler
    I have created a login and database user called "MYDOMAIN\Domain Users". I need to find what roles a logged-on domain user has, but all the calls to get the current user return the domain username, e.g. "MYDOMAIN\username", not the database username, e.g. "MYDOMAIN\Domain Users". For example, this query returns "MYDOMAIN\username":

        select original_login(), suser_name(), suser_sname(), system_user, session_user, current_user, user_name()

    And this query returns 0:

        select USER_ID()

    I want the username so I can query database_role_members. Is there any function that will return it, or any other way I can get the current user's roles?

    Read the article

  • Search SQL Question Between Related Two Tables

    - by mTuran
    Hi, I am writing some kind of search engine for my web application and I have a problem. I have two tables, the first of which is the projects table:

        PROJECTS TABLE
        id                   int(11)       NO  PRI  NULL  auto_increment
        employer_id          int(11)       NO  MUL  NULL
        project_title        varchar(100)  NO  MUL  NULL
        project_description  text          NO       NULL
        project_budget       int(11)       NO       NULL
        project_allowedtime  int(11)       NO       NULL
        project_deadline     datetime      NO       NULL
        total_bids           int(11)       NO       NULL
        average_bid          int(11)       NO       NULL
        created              datetime      NO  MUL  NULL
        active               tinyint(1)    NO  MUL  NULL

        PROJECTS_SKILLS TABLE
        project_id  int(11)  NO  MUL  NULL
        skill_id    int(11)  NO  MUL  NULL

    For example, I want to ask the database this query:

    1) Skills are 5 and 7.
    2) Order results by created.
    3) Project title contains the word "php".
    4) Returned rows should contain the projects.* columns.
    5) Projects should be distinct (I don't want the same project returned twice).

    Please write an SQL query that ensures these conditions. Thank you.

    Read the article

  • Is an ORM redundant with a NoSQL API?

    - by Earlz
    Hello, with MongoDB (and, I assume, other NoSQL database APIs worth their salt) the ways of querying the database are much simpler than SQL. There are no tedious SQL queries to generate and so on. For instance, take this from mongodb-csharp:

        using MongoDB.Driver;

        Mongo db = new Mongo();
        db.Connect(); //Connect to localhost on the default port.
        Document query = new Document();
        query["field1"] = 10;
        Document result = db["tests"]["reads"].FindOne(query);
        db.Disconnect();

    How could an ORM even simplify that? Is an ORM or other "database abstraction device" required on top of a decent NoSQL API?

    Read the article

  • php push 2d array into mysql

    - by john
    Hey all, I can't seem to get my head around this despite the number of examples I've read. Basically I have a 2D array and want to insert it into MySQL. The array contains a few strings. I can't get the following to work...

        $value = addslashes(serialize($temp3)); // temp3 is my 2d array; do I need to use keys? (I am not at the moment)
        $query = "INSERT INTO table sip (id,keyword,data,flags) VALUES(\"$value\")";
        mysql_query($query) or die("Failed Query");

    Thanks guys,

    Read the article
