Search Results

Search found 17627 results on 706 pages for 'hierarchical query'.


  • MySQL SELECT MAX across multiple tables: for each parent, return the eldest son's picture

    - by Guillermo
    **Table parent**: parentId | name
    **Table children**: childId | parentId | pictureId | age
    **Table childrenPictures**: pictureId | imgUrl

    Now I would like to return all parent names with their eldest son's picture (only return parents that have children, and only consider children that have pictures). So I thought of something like:

    ```sql
    SELECT c.childId AS childId, p.name AS parentName,
           cp.imgUrl AS imgUrl, MAX(c.age) AS age
    FROM parent AS p
    RIGHT JOIN children AS c ON (p.parentId = c.parentId)
    RIGHT JOIN childrenPictures AS cp ON (c.pictureId = cp.pictureId)
    GROUP BY p.name
    ```

    This query will return each parent's eldest son's age, but the childId will not correspond to the eldest son's id, so the output does not show the right son's picture. Well, if anyone has a hint I'd appreciate it very much. Thank you very much, G
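
    A hedged sketch of the usual greatest-n-per-group fix (untested against this schema): compute each parent's maximum child age, among children that have pictures, in a derived table, then join back so childId and imgUrl come from that same eldest child. Note that two children tied on the maximum age would both be returned:

    ```sql
    -- Sketch only: derive each parent's max age among children with
    -- pictures, then join back to pick that child's own picture.
    SELECT p.name AS parentName, c.childId, cp.imgUrl, c.age
    FROM parent AS p
    JOIN children AS c ON c.parentId = p.parentId
    JOIN childrenPictures AS cp ON cp.pictureId = c.pictureId
    JOIN (
        SELECT ch.parentId, MAX(ch.age) AS maxAge
        FROM children AS ch
        JOIN childrenPictures AS chp ON chp.pictureId = ch.pictureId
        GROUP BY ch.parentId
    ) AS eldest
      ON eldest.parentId = c.parentId AND eldest.maxAge = c.age;
    ```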


  • Finding first alphabetic character in a DB2 database field

    - by Paul Alan Taylor
    I'm doing a bit of work which requires me to truncate DB2 character-based fields. Essentially, I need to discard all text which is found at or after the first alphabetic character. e.g. 102048994BLAHBLAHBLAH becomes 102048994. In SQL Server, this would be a doddle - PATINDEX would swoop in and save the day. Much celebration would ensue.

    My problem is that I need to do this in DB2. Worse, the result needs to be used in a join query, also in DB2. I can't find an easy way to do this. Is there a PATINDEX equivalent in DB2? Is there another way to solve this problem? If need be, I'll hardcode 26 chained LOCATE functions to get my result, but if there is a better way, I am all ears.
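
    A hedged sketch of one TRANSLATE-based alternative (DB2 LUW syntax from memory; mytable and col are placeholders). The trick is to map every letter to a single sentinel character and then LOCATE the sentinel; note that in DB2's TRANSLATE the to-string comes before the from-string:

    ```sql
    -- Sketch only: '@' stands in for "any letter"; LOCATE finds the first
    -- one, and 0 means the value contains no letters at all.
    SELECT CASE WHEN t.pos = 0 THEN t.col
                ELSE LEFT(t.col, t.pos - 1)
           END AS truncated
    FROM (
        SELECT col,
               LOCATE('@', TRANSLATE(UPPER(col),
                                     REPEAT('@', 26),
                                     'ABCDEFGHIJKLMNOPQRSTUVWXYZ')) AS pos
        FROM mytable
    ) AS t;
    ```

    The same expression can be dropped into a join's ON clause, though wrapping it in a user-defined scalar function would read better if that is an option.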


  • Using Phing's dbdeploy task with transactions

    - by Gordon
    I am using Phing's dbdeploy task to manage my database schema. This is working fine, as long as there are no errors in my delta files. However, if there is an error, dbdeploy will just run the delta files up to the query with the error and then abort. This causes me some frustration, because I then have to manually roll back the entry in the changelog table. If I don't, dbdeploy will assume the migration was successful on a subsequent try. So the question is: is there any way to get dbdeploy to use transactions?


  • What does "?ref=ts" mean in a Facebook app URL

    - by jozecuervo
    When Facebook drives traffic to an app, they often append &ref=whatever to the query string. This is useful for figuring out which integration points are working or not. I've figured out what some of these mean. For example:

    - ref=bookmarks - the user clicked on a bookmark.
    - ref=game_my_recent - the user clicked on the upper portion of the games dashboard.

    Does anyone know what "ref=ts" means? It accounts for a ton of traffic. I've viewed source on common FB pages all over and cannot find a match for any piece of content generated by any of my apps. Same question, posted by me on the FB dev forum: http://forum.developers.facebook.com/viewtopic.php?id=54866 I'm thinking I'll get better answers here ;) Thanks, people. Jose


  • How to fire a function on the PageLoad event of an XtraReport?

    - by ahmed
    I have a situation: I am working on an XtraReport where I am pulling some data from a table. I created a function and pass it a parameter at runtime for the report to generate. When I fire the function from a button's OnClick event the result is perfect, but when I fire the same function on the page load of the ReportViewer, I do not get the result. About the function: I create a dataset and an adapter with a SQL query, pass the parameter value from a session into the query at runtime, and bind the data to the labels. But as I said, I want the result on the page-load event of the ReportViewer.


  • How to log SubSonic3 SQL

    - by bastos.sergio
    I'm starting to develop a new ASP.NET application based on SubSonic3 (for queries) and log4net (for logging), and would like to know how to interface SubSonic3 with log4net so that log4net logs the underlying SQL used by SubSonic. This is what I have so far:

    ```csharp
    public static IEnumerable<arma_ocorrencium> ListArmasOcorrencia()
    {
        if (logger.IsInfoEnabled)
        {
            logger.Info("ListarArmasOcorrencia: start");
        }

        var db = new BdvdDB();
        var select = from p in db.arma_ocorrencia
                     select p;
        var results = select.ToList<arma_ocorrencium>(); // Execute the query here

        if (logger.IsInfoEnabled)
        {
            // log sql here
        }

        if (logger.IsInfoEnabled)
        {
            logger.Info("ListarArmasOcorrencia: end");
        }

        return results;
    }
    ```


  • Database design - alternatives for Entity Attribute Value (EAV)

    - by Bob
    See http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters for a similar topic.

    My question: I want to design a database that will be used for a production facility with different types of products, where each product has its own (number of) parameters. Because I want the serial numbers to be in one table for overview purposes, I have a problem with these differing parameters. One solution could be EAV, but it has its downsides, certainly because we have +-5 products with +-20,000 serial numbers (records) each. It looks a bit overkill to me... I just don't know how one could design a database so that you have an attribute in a master table that says: "hey, you can find the details of this record in THAT detail table" (in a way that you could easily query the results). Currently I am using Visual Basic & Access 2007, but I'm moving to Visual Basic & MySQL. Thanks for your help. Bob
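
    A hedged sketch of the usual alternative to EAV for this shape of problem, class-table inheritance (all names below are invented): one master table holds every serial number plus a type discriminator, and each product type gets its own detail table keyed by the same serial number:

    ```sql
    -- Sketch only, all names invented: one master table for every serial
    -- number, plus one detail table per product type.
    CREATE TABLE serial_numbers (
        serial_id    INT         NOT NULL PRIMARY KEY,
        product_type VARCHAR(20) NOT NULL  -- discriminator: which detail table to join
    );

    CREATE TABLE product_widget_details (
        serial_id  INT NOT NULL PRIMARY KEY,
        voltage    DECIMAL(6,2),
        color_code CHAR(3),
        FOREIGN KEY (serial_id) REFERENCES serial_numbers (serial_id)
    );

    -- ...and similarly product_gadget_details, etc., one detail table per
    -- product type, each holding only that type's parameters.
    ```

    With only around five product types, five detail tables stay manageable, and the discriminator column is exactly the "look in THAT table" attribute described above.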


  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 &ndash; CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible.  Many people might not realize how much of an undertaking this can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of the project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to accomplish the efficiency that I look to achieve.  One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory profiler – gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against.

    **Notice** As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program.  I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

    **Introduction** The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no-brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading and installing the program, you fill out a form with your usual contact information.  Sure enough, within 2 hours I received an email from a sales representative at Red Gate asking if she could help me achieve the most out of my trial time so it wouldn't go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone who set up a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gate's friendly and helpful representatives were a breath of fresh air, and something I was thankful for.

    **ANTS CLR Profiler** The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed both the profilers for the CLR and memory, and also Visual Studio extensions to facilitate the usage of the profilers.

    [Image: the Visual Studio menu options, under the ANTS menu]
    [Image: the window shown when starting the CLR Performance Profiler from the start menu]

    If you follow the instructions after launching the program from the start menu (click File > New Profiling Session to start a new project), you are given the New Session dialog, with plenty of options for profiling.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%.  This is a problem I have seen in other applications as well that don't scale well to different DPI scales.
    The profiler gives you the ability to profile:

    - .NET executable
    - ASP.NET web application (hosted in IIS)
    - ASP.NET web application (hosted in IIS Express)
    - ASP.NET web application (hosted in the Cassini Web Development Server)
    - SharePoint web application (hosted in IIS)
    - Silverlight 4+ application
    - Windows Service
    - COM+ server
    - XBAP (local XAML browser application)
    - Attach to an already running .NET 4 process

    Choosing each option provides a varying set of other variables/options that one can set, including application arguments, operating path, whether to record I/O performance, which performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .NET project types, and make it simple to do so.  In most cases I would be using the built-in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options set up and start your program; however, Red Gate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/GUI to perform profiling, they would provide a Visual Studio panel with the information and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of what options the profiler gives us, it's time to test its abilities and features.

    **Horrendous Example Code – Prime Number Generator** One of my interests besides development is physics and math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, to test the abilities of the profiler, I decided to write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool. First off, the IPrimes interface (all code is downloadable at the end of the post):

    ```csharp
    interface IPrimes
    {
        IEnumerable<int> GetPrimes(int retrieve);
    }
    ```

    Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way:

    ```csharp
    public class DumbPrimes : IPrimes
    {
        public IEnumerable<int> GetPrimes(int retrieve)
        {
            // store a list of primes already found
            var _foundPrimes = new List<int>() { 2, 3 };

            // if I ask for one or two primes, return what was asked for
            if (retrieve <= _foundPrimes.Count())
                return _foundPrimes.Take(retrieve);

            // the next number to look at
            int _analyzing = 4;

            // since I already determined I don't have enough,
            // execute at least once, and until the quantity is sufficed
            do
            {
                // assume prime until otherwise determined
                bool isPrime = true;

                // start dividing at 2;
                // divide until the number is reached, or determined not prime
                for (int i = 2; i < _analyzing && isPrime; i++)
                {
                    // if (i) goes into _analyzing without a remainder,
                    // _analyzing is NOT prime
                    if (_analyzing % i == 0)
                        isPrime = false;
                }

                // if it is prime, add to the found list
                if (isPrime)
                    _foundPrimes.Add(_analyzing);

                // increment the number to analyze next
                _analyzing++;
            } while (_foundPrimes.Count() < retrieve);

            return _foundPrimes;
        }
    }
    ```

    This is the simplest way to get primes, in my opinion:
    checking each number by the straight definition of a prime – is it divisible by anything besides 1 and itself? I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textboxes, and a button.

    **Starting a new Profiling Session** So, in Visual Studio, I have just completed my first stint developing the GUI and the DumbPrimes IPrimes class, so now I want to check my code's efficiency by profiling it.  All I have to do is build the solution (I'm surprised that initiating a profiling session doesn't do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live.  You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes.

    After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds with it – almost a 1500 percent increase.  While this may seem like a lot, remember that there is a trade-off.  It may be WAY more inefficient; however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around.

    **Analyzing the Profiling Session** After clicking 'Stop Profiling', the process running my application stopped, and the entire execution time was automatically selected by ANTS, with the results shown below. There are a number of interesting things going on here; I am going to cover each in a section of its own.

    **Real Time Performance Counter Bar (top of screen)** At the top of the screen is the real time performance bar.  As your application is running, this will constantly update with the status of the currently selected performance counters.  A couple of cool things to note: you can drag a selection around specific time periods to drill down the detail views in the lower two panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after being reloaded at a later time.  You can also zoom in, zoom out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph, below the time ticks but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the 'Call tree' panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.
    Clicking the arrow next to the Performance tab at the top allows you to change between different counters if you have them selected.

    **Method Call Tree, ADO.NET Database Calls, File I/O – Detail Panel** Red Gate really hit the mark here, I think. When you select a section of the run with the graph, the call tree populates with a hierarchical tree of method calls, with information regarding each of the methods.  By default, methods are hidden where the source is not provided (framework-type code); however, Red Gate has integrated Reflector into ANTS, so even if you don't have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that execute for only 1% of the overall calling method's time; in other words, working on making them better is not where your efforts should be focused. – Smart!

    **Source Panel – Detail Panel** The source panel is where you can see line-level information on your code, showing the code for the currently selected method from the method call tree.  If the code is not available, Reflector takes care of it and shows the code anyway! As you can notice, there does seem to be a problem with how ANTS determines which line a call is actually completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit; however, I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics.  I did a small test just to demonstrate that the lines are correct without comments. For me, it isn't a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time.

    **Feature – Suggest Method for Optimization** A neat feature, to really help those in need of a pointer, is the menu option under Tools to automatically suggest methods to optimize/improve.  Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of us who aren't great on sight.

    **Process Integration** I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development and testing, and then, after all the bugs are worked out, use the profiler to check on things and make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, run it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in one method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I'm just going to make another implementation of IPrimes).
    Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – prime numbers above 3 fall into two buckets, 6k+/-1, and a number is prime if it is not divisible by any other primes before it:

    ```csharp
    public class SmartPrimes : IPrimes
    {
        public IEnumerable<int> GetPrimes(int retrieve)
        {
            // store a list of primes already found
            var _foundPrimes = new List<int>() { 2, 3 };

            // if I ask for one or two primes, return what was asked for
            if (retrieve <= _foundPrimes.Count())
                return _foundPrimes.Take(retrieve);

            // the next k to look at
            int _k = 1;

            // since I already determined I don't have enough,
            // execute at least once, and until the quantity is sufficed
            do
            {
                // assume prime until otherwise determined
                bool isPrime = true;
                int potentialPrime;

                // analyze 6k-1
                potentialPrime = 6 * _k - 1;

                // if any earlier prime divides this, it is NOT a prime number
                // (using PLINQ for a quick boost)
                isPrime = !_foundPrimes.AsParallel()
                                       .Any(prime => potentialPrime % prime == 0);

                // if it is prime, add to the found list
                if (isPrime)
                    _foundPrimes.Add(potentialPrime);

                if (_foundPrimes.Count() == retrieve)
                    break;

                // analyze 6k+1
                potentialPrime = 6 * _k + 1;

                // if any earlier prime divides this, it is NOT a prime number
                // (using PLINQ for a quick boost)
                isPrime = !_foundPrimes.AsParallel()
                                       .Any(prime => potentialPrime % prime == 0);

                // if it is prime, add to the found list
                if (isPrime)
                    _foundPrimes.Add(potentialPrime);

                // increment k to analyze next
                _k++;
            } while (_foundPrimes.Count() < retrieve);

            return _foundPrimes;
        }
    }
    ```

    Now there are definitely more things I can do to make this more efficient, but for the scope of this example, I think this is fine (though still hideous)! Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.  One important thing I wanted to call out, though, was the performance graph: the %Processor time is now above 100%.  This is because there is now more than one core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot-on this time in my DumbPrimes class with line details in source, even with comments.  Odd.

    **Profiling Web Applications** The last thing that I wanted to cover, which means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler for profiling web applications.  In my solution, I have a simple MVC4 application set up with one page and a single input form, which outputs prime values as my WPF app did.  Launching the profiler from Visual Studio as before, nothing is really different in the profiler window; however, I did receive a UAC prompt, without notification, for a Red Gate helper app to integrate with the web server. After requesting 500, 1000, 2000, and 5000 primes and looking at the profiler session, things are slightly different from before: there are four spikes of activity in the processor time graph, but there is also something new in the call tree.  That's right – ANTS will actually group method calls by GET/POST operations, so it is easier to find out which action/page is giving the largest problems…  Pretty cool in my mind!

    **Overview** Overall, I think that Red Gate ANTS CLR Profiler has a lot to offer, however I think it also has a long way to go.
    3 Biggest Pros:

    - Ability to easily drill down from the time graph, to method calls, to source code
    - Wide variety of counters to choose from when profiling your application
    - Excellent integration/grouping of methods being called from web applications by request – BRILLIANT!

    3 Biggest Cons:

    - Issue regarding line details in the source view
    - Nitpick – Processor time vs. core time
    - Nitpick – Lack of full integration with Visual Studio

    **Ratings**

    - Ease of Use (7/10) – I marked down here because of the problems with the line-level details and the extra work that that entails, and the lack of better integration with Visual Studio.
    - Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus!
    - Features (9/10) – Besides the real-time performance monitoring and the drill-downs that I've shown here, ANTS also has great integration with ADO.NET, with the ability to show database queries run by your application in the profiler.  This, with the line-level details, the web request grouping, the Reflector integration, and the various options to customize your profiling session, I think makes a great set of features!
    - Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy!
    - UI/UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you'll be optimizing Hello World in no time flat!
    - Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend trying the application and seeing if it would fit into your process, BUT remember there are still some kinks in it that will hopefully be worked out.

    My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product.

    **Code** Feel free to download the code I used above – download via DropBox


  • Why does the TheyWorkForYou (TWFY) web API always return '{}'?

    - by Tartley
    I'm calling a web API exposed by TheyWorkForYou (TWFY): http://www.theyworkforyou.com/api/ I'm using the Python bindings provided by twfython: http://code.google.com/p/twfython/

    I wrote some code to call this API a few months ago, at which time it worked fine. But now that I've dug it out to run again, no matter what query I ask of the API, it always returns '{}' (an empty dictionary). For example, the following code should return a list of all MPs:

    ```python
    from twfy import TWFY

    API_KEY = 'XXXXXXXXXXXXXXXXXXXXXX'
    twfy = TWFY.TWFY(API_KEY)
    print twfy.api.getMPs(output='js')
    ```

    Am I being really dumb? What else should I check?


  • Using named pipes in Rails

    - by FancyDancy
    I need to read and write some data through named pipes. I have tested this in a simple Ruby app, and it works nicely. But I don't know where I should put it in my Rails app. I have heard about Rake tasks, but I'm not sure that is the right solution. I need to open a pipe file and listen for data. If there is any, I need to process it and make a DB query. Then I need to write some data to another pipe. I know how it works; the only problem is how to run it with Rails. Give me some examples, please.


  • What to do when the Bing API provides inaccurate results

    - by hao
    I am trying to use the Bing Phonebook search for locations in China, but all the latitudes and longitudes are inaccurate. Using the following:

    http://api.bing.net/xml.aspx?AppId=appid&Query=nike&Sources=Phonebook&Latitude=39.9883699&Longitude=116.3309665&Radius=30.0&Phonebook.Count=10&Phonebook.Offset=30

    I get multiple locations with the same latitude and longitude, and the rounding is off as well. The latitude and longitude will always end with either .x00001 or .0. The result in the pho:LocalSerpUrl element (http://www.bing.com/shenghuo/default.aspx?what=nike&where=&s_cid=ansPhBkYp01&ac=false&FORM=SOAPGN) is right, but the results in the returned XML are off. Also, it seems users outside of China can't hit that URL and get the result. So I am wondering how I can contact Bing and inquire about this problem.


  • Odd SQL Results

    - by Ryan Burnham
    So I have the following query:

    ```sql
    SELECT id, [First], [Last], [Business] AS contactbusiness,
           (CASE WHEN ([Business] != '' OR [Business] IS NOT NULL)
                 THEN [Business]
                 ELSE 'No Phone Number'
            END)
    FROM contacts
    ```

    The results look like:

        id  First  Last   contactbusiness  (No column name)
        2   John   Smith
        3   Sarah  Jane   0411 111 222     0411 111 222
        6   John   Smith  0411 111 111     0411 111 111
        8                 NULL             No Phone Number
        11  Ryan   B      08 9999 9999     08 9999 9999
        14  David  F      NULL             No Phone Number

    I'd expect record 2 to also show "No Phone Number". If I change the "[Business] IS NOT NULL" to "[Business] != null", then I get the correct results:

        id  First  Last   contactbusiness  (No column name)
        2   John   Smith                   No Phone Number
        3   Sarah  Jane   0411 111 222     0411 111 222
        6   John   Smith  0411 111 111     0411 111 111
        8                 NULL             No Phone Number
        11  Ryan   B      08 9999 9999     08 9999 9999
        14  David  F      NULL             No Phone Number

    Normally you need to use IS NOT NULL rather than != null. What's going on here?
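
    A hedged note on what is likely happening: record 2's [Business] is an empty string, so [Business] IS NOT NULL is true and the OR branch returns the empty value itself. With != null the comparison always evaluates to UNKNOWN (under the default ANSI_NULLS ON), so the first branch never fires for blank values and the ELSE kicks in, which only accidentally looks "correct". The intent reads like an AND, sketched here:

    ```sql
    -- Sketch only: treat both NULL and '' as "no phone number".
    SELECT id, [First], [Last], [Business] AS contactbusiness,
           CASE WHEN [Business] IS NOT NULL AND [Business] != ''
                THEN [Business]
                ELSE 'No Phone Number'
           END AS phone
    FROM contacts;
    ```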


  • SQL Server 2008 ContainsTable, CTE, and Paging

    - by David Murdoch
    I'd like to perform efficient paging using CONTAINSTABLE. The following query selects the top 10 ranked results from my database using CONTAINSTABLE when searching for a name (first or last) that begins with "Joh":

    ```sql
    DECLARE @Limit int;
    SET @Limit = 10;

    SELECT TOP (@Limit) c.ChildID, c.PersonID, c.DOB, c.Gender
    FROM [Person].[vFullName] AS v
    INNER JOIN CONTAINSTABLE(
        [Person].[vFullName],
        (FullName),
        'ISABOUT ("Joh*" WEIGHT (.4), "Joh" WEIGHT (.6))'
    ) AS k3 ON v.PersonID = k3.[KEY]
    JOIN [Child].[Details] c ON c.PersonID = v.PersonID
    JOIN [Person].[Details] p ON p.PersonID = c.PersonID
    ORDER BY k3.RANK DESC, FullName ASC, p.Active DESC, c.ChildID ASC
    ```

    I'd like to combine it with the following CTE, which returns the 10th-20th results ordered by ChildID (the primary key):

    ```sql
    DECLARE @Start int;
    DECLARE @Limit int;
    SET @Start = 10;
    SET @Limit = 10;

    WITH ChildEntities AS (
        SELECT ROW_NUMBER() OVER (ORDER BY ChildID) AS Row, ChildID
        FROM Child.Details
    )
    SELECT c.ChildID, c.PersonID, c.DOB, c.Gender
    FROM ChildEntities cte
    INNER JOIN Child.Details c ON cte.ChildID = c.ChildID
    WHERE cte.Row BETWEEN @Start + 1 AND @Start + @Limit
    ORDER BY cte.Row ASC
    ```
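
    A hedged sketch of one way to combine the two, assuming the pages should follow the full-text rank order rather than ChildID: move the CONTAINSTABLE join inside the CTE and number the hits with ROW_NUMBER() over the same ORDER BY the TOP query used, then page on that row number:

    ```sql
    -- Sketch only: number the ranked hits, then page on the row number.
    DECLARE @Start int = 10;
    DECLARE @Limit int = 10;

    WITH RankedChildren AS (
        SELECT c.ChildID, c.PersonID, c.DOB, c.Gender,
               ROW_NUMBER() OVER (
                   ORDER BY k3.RANK DESC, v.FullName ASC, p.Active DESC, c.ChildID ASC
               ) AS Row
        FROM [Person].[vFullName] AS v
        INNER JOIN CONTAINSTABLE(
            [Person].[vFullName],
            (FullName),
            'ISABOUT ("Joh*" WEIGHT (.4), "Joh" WEIGHT (.6))'
        ) AS k3 ON v.PersonID = k3.[KEY]
        JOIN [Child].[Details] c ON c.PersonID = v.PersonID
        JOIN [Person].[Details] p ON p.PersonID = c.PersonID
    )
    SELECT ChildID, PersonID, DOB, Gender
    FROM RankedChildren
    WHERE Row BETWEEN @Start + 1 AND @Start + @Limit
    ORDER BY Row;
    ```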


  • Storing user info in Session using an Object vs. normal variables

    - by justinl
    I'm in the process of implementing a user authentication system for my website. I'm using an open source library that maintains user information by creating a User object and storing that object inside my PHP SESSION variable. Is this the best way to store and access that information? I find it a bit of a hassle to access the user variables, because I have to retrieve the object before I can access them:

    ```php
    $userObj = $_SESSION['userObject'];
    $userObj->userId;
    ```

    instead of just accessing the user ID the way I would usually store it:

    ```php
    $_SESSION['userId'];
    ```

    Is there an advantage to storing a bunch of user data as an object instead of just storing them as individual SESSION variables? PS - The library also seems to store a handful of variables inside the user object (id, username, date joined, email, last user DB query), but I really don't care to have all that information stored in my session. I only really want to keep the user ID and username.


  • Retrieve a value inside a stored procedure and use it inside that stored procedure

    - by sai
    ```sql
    DELIMITER //
    CREATE DEFINER=root@localhost PROCEDURE getData(
        IN templateName VARCHAR(45),
        IN templateVersion VARCHAR(45),
        IN userId VARCHAR(45))
    BEGIN
        SET @version = CONCAT(
            "SELECT saveOEMsData_answersVersion FROM saveOEMsData",
            " WHERE saveOEMsData_templateName = '", templateName, "'",
            " AND saveOEMsData_templateVersion = ", templateVersion,
            " AND saveOEMsData_userId = ", userId);
        PREPARE s1 FROM @version;
        EXECUTE s1;
    END //
    DELIMITER ;
    ```

    I am retrieving saveOEMsData_answersVersion, but I have to use it in an IF condition: if the version equals 1, then I would use one query, else I would use something else. But I am not able to use the version. Could someone help with this? I am only able to print it, not use it for data manipulation. The procedure works fine, but I am unable to proceed to the next step, which is the IF condition. The IF condition would have something like this:

    ```sql
    IF (ver = 1) THEN
        SELECT "1";
    END IF;
    ```
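
    A hedged sketch of one way to get there: since the concatenated values are data rather than identifiers, the dynamic SQL may not be needed at all, and SELECT ... INTO makes the value usable in an IF:

    ```sql
    -- Sketch only: capture the version into a variable, then branch on it.
    CREATE PROCEDURE getData(
        IN templateName VARCHAR(45),
        IN templateVersion VARCHAR(45),
        IN userId VARCHAR(45))
    BEGIN
        DECLARE ver INT;

        SELECT saveOEMsData_answersVersion INTO ver
        FROM saveOEMsData
        WHERE saveOEMsData_templateName = templateName
          AND saveOEMsData_templateVersion = templateVersion
          AND saveOEMsData_userId = userId;

        IF ver = 1 THEN
            SELECT "1";      -- the version-1 query goes here
        ELSE
            SELECT "other";  -- the other query goes here
        END IF;
    END
    ```

    If the dynamic SQL really is required, the prepared string can end with INTO @ver instead, and @ver is readable after EXECUTE s1.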


  • Selecting by observation in an R table

    - by Stedy
    I was working through the Rosetta Code example of the knapsack problem in R, and I came up with four solutions. What is the best way to output only one of the solutions, based on the observation number given in the table?

    ```r
    > values[values$value==max(values$value),]
         I II III value weight volume
    2067 9  0  11 54500     25     25
    2119 6  5  11 54500     25     25
    2171 3 10  11 54500     25     25
    2223 0 15  11 54500     25     25
    ```

    My initial idea was to save this output to a new variable, then query the new variable:

    ```r
    > newvariable <- values[values$value==max(values$value),]
    > newvariable2 <- newvariable[2,]
    > newvariable2
         I II III value weight volume
    2119 6  5  11 54500     25     25
    ```

    This gives me the desired output, but is there a cleaner way to do it?


  • Select all links from an HTML table using XPath (and HtmlAgilityPack)

    - by Adam Asham
    What I am trying to achieve is to extract all links with an href attribute that starts with http://, https://, or /. These links lie within a table (tbody, tr, td, etc.) with a certain class. I thought I could specify just the a element without the whole path to it, but it does not seem to work. I get a NullReferenceException at the line that selects the links:

    ```csharp
    var table = doc.DocumentNode.SelectSingleNode("//table[@class='containerTable']");
    if (table != null)
    {
        foreach (HtmlNode item in table.SelectNodes("a[starts-with(@href, 'https://')]")) // not working
        {
    ```

    I don't know about any recommendations or best practices when it comes to XPath. Do I create overhead when I query the document two times?


  • Getting Phing's dbdeploy task to automatically rollback on delta error

    - by Gordon
    I am using Phing's dbdeploy task to manage my database schema. This is working fine, as long as there are no errors in the queries of my delta files. However, if there is an error, dbdeploy will just run the delta files up to the query with the error and then abort. This causes me some frustration, because I then have to manually roll back the entry in the changelog table. If I don't, dbdeploy will assume the migration was successful on a subsequent try. So the question is: is there any way to get dbdeploy to use transactions, or can you suggest any other way to have Phing roll back automatically when an error occurs? Note: I'm not that proficient with Phing, so if this involves writing a custom task, any example code or a URL with further information is highly appreciated. Thanks


  • Hierarchy based aggregation

    - by Ganapathy Subramaniam
    I have a hierarchy table in SQL Server 2005 which contains employees - managers - department - location - state.

    Sample data for the hierarchy table:

        ID  Name        ParentID  Type
        1   PA          NULL      0 (group)
        2   Pittsburgh  1         1 (subgroup)
        3   Accounts    2         1
        4   Alex        3         2 (employee)
        5   Robin       3         2
        6   HR          2         1
        7   Robert      6         2

    The second table is a fact table which contains employee salary details (ID and Salary). Sample data for the fact table:

        ID  Salary
        4   6000
        5   5000
        7   4000

    Is there any good way to display the hierarchy from the hierarchy table with the aggregated sum of salary based on employees? The expected result is:

        Name        Salary
        PA          15000  (Pittsburgh + others, if any)
        Pittsburgh  15000  (Accounts + HR)
        Accounts    11000  (Alex + Robin)
        Alex        6000   (direct values)
        Robin       5000
        HR          4000
        Robert      4000

    In my production environment, the hierarchy table may contain 23,000+ rows and the fact table may contain 300,000+ rows. So I thought of providing any level of group ID to the query, to retrieve just its children and their corresponding aggregated values. Any better solution?
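
    A hedged sketch of one approach using a recursive CTE (available in SQL Server 2005); the table names Hierarchy and SalaryFact are placeholders for the two tables above. It pairs every node with all of its descendants, then rolls the salaries up per node:

    ```sql
    -- Sketch only: expand each node to its full subtree (including itself),
    -- then sum the fact rows per subtree root.
    WITH Descendants (RootID, NodeID) AS (
        SELECT ID, ID FROM Hierarchy
        UNION ALL
        SELECT d.RootID, h.ID
        FROM Descendants AS d
        JOIN Hierarchy AS h ON h.ParentID = d.NodeID
    )
    SELECT r.Name, SUM(f.Salary) AS Salary
    FROM Descendants AS d
    JOIN Hierarchy AS r ON r.ID = d.RootID
    JOIN SalaryFact AS f ON f.ID = d.NodeID
    GROUP BY r.ID, r.Name;
    ```

    To serve a single subgroup at a time, seed the anchor member with WHERE ID = @GroupID instead of taking every node as a root; for very deep trees, OPTION (MAXRECURSION 0) may be needed.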


  • JPQL / SQL: How to select * from a table with group by on a single column?

    - by DavidD
    I would like to select every column of a table, but want to have distinct values on a single attribute of my rows (City in the example). I don't want extra columns like counts or anything, just a limited number of results, and it seems like it is not possible to directly LIMIT results in a JPQL query.

    Original table:

        ID | Name   | City
        -------------------
        1  | John   | NY
        2  | Maria  | LA
        3  | John   | LA
        4  | Albert | NY

    Wanted result, if I do the distinct on the City:

        ID | Name  | City
        ------------------
        1  | John  | NY
        2  | Maria | LA

    What is the best way to do that? Thank you for your help.
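
    A hedged sketch in plain SQL, since JPQL has no direct way to express this, so it would run as a native query; the table name person and the "keep the lowest ID per city" rule are assumptions read off the example output:

    ```sql
    -- Sketch only: one row per city, chosen as the lowest ID in that city.
    SELECT p.ID, p.Name, p.City
    FROM person AS p
    JOIN (
        SELECT City, MIN(ID) AS minId
        FROM person
        GROUP BY City
    ) AS firsts ON firsts.minId = p.ID;
    ```

    In JPA this could be executed via EntityManager.createNativeQuery, mapping the result back to the entity class.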


  • Switching from LinqToXYZ to LinqToObjects

    - by spender
    In answering this question, it got me thinking... I often use this pattern:

    ```csharp
    collectionofsomestuff // here it's LinqToEntities
        .Select(something => new { something.Name, something.SomeGuid })
        .ToArray() // from here on it's LinqToObjects
        .Select(s => new SelectListItem()
        {
            Text = s.Name,
            Value = s.SomeGuid.ToString(),
            Selected = false
        })
    ```

    Perhaps I'd split it over a couple of lines, but essentially, at the ToArray point, I'm effectively enumerating my query and storing the resulting sequence so that I can further process it with all the goodness of a full CLR to hand. As I have no interest in any kind of manipulation of the intermediate list, I use ToArray over ToList as there's less overhead. I do this all the time, but I wonder if there is a better pattern for this kind of problem?


  • ASP.NET URL MAX_PATH limit

    - by Greg Ballard
    I've found an issue with ASP.NET that I know has stumped at least one other person out there. We were trying to use an HttpModule to handle wildcard requests to a web application. The generated URL is dynamic and could potentially be several hundred characters long. Unfortunately, there appears to be a limitation in the aspnet_isapi.dll file that limits the length of the path in the URL to MAX_PATH, which is hardcoded at 260 chars. Has anyone else run into this and found a way around this limit? Query string parameters are not an option. Thanks, Greg Ballard


  • Are the svn ruby bindings provided as a gem?

    - by user30997
    I see a couple dozen gems that relate to svn, but what little documentation I can find on any of them shows that they are command-line wrappers and miscellaneous helpers (svn-command, svn-hooks, etc.). I've seen code in the wild that does things like require 'svn/core' and SVN.Repos.add(...), but the author of that module pulled his svn Ruby tools via apt-get. This would not be an option for me, as I'm developing a Windows/OSX tool. Which gem am I after? From there, I'm happy to dig through code in lieu of documents, but with a call to gem query --name-matches svn --remote returning about 30 hits, I need to narrow it down a bit first.


  • MySql timeouts - Should I set autoReconnect=true in Spring application?

    - by George
    After periods of inactivity on my website (using Spring 2.5 and MySQL), I get the following error:

        org.springframework.dao.RecoverableDataAccessException: The last packet sent successfully to the server was 52,847,830 milliseconds ago, which is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.

    According to this question, and the linked bug, I shouldn't just set autoReconnect=true. Does this mean I have to catch this exception on any queries I do and then retry the transaction? Should that logic be in the data access layer or the model layer? Is there an easy way to handle this instead of wrapping every single query to catch this?


  • [ASP.NET] Odd HttpRequest behaviour

    - by barguast
    I have a web service which runs with an HttpHandler class. In this class, I inspect the request stream for form/query string parameters. In some circumstances, it seemed as though these parameters weren't getting through. After a bit of digging around, I came across some behaviour I don't quite understand. See below:

    ```csharp
    // The request contains 'a=1&b=2&c=3'

    // TEST ONLY: Read the entire request
    string contents;
    using (StreamReader sr = new StreamReader(context.Request.InputStream))
    {
        contents = sr.ReadToEnd();
    }
    // Here 'contents' is usually correct - containing 'a=1&b=2&c=3'.
    // Sometimes it is empty.

    string a = context.Request["a"];
    // Here, a = null, regardless of whether the 'contents' variable
    // above is correct
    ```

    Can anyone explain to me why this might be happening? I'm using a .NET WebClient and UploadDataAsync to perform the request on the client, if that makes any difference. If you need any more information, please let me know.

