Search Results

Search found 20224 results on 809 pages for 'query optimization'.

  • Doing a generic <sql:query> in Grails

    - by melling
    This is a generic way to select data from a table and show the results in an HTML table using JSP taglibs. What is the generic way to do this in Grails? That is, take a few lines of SQL and generate an HTML table from scratch in Grails, including the column names as headers.

        <sql:query var="results" dataSource="${dsource}">
            select * from foo
        </sql:query>
        (# of rows: ${results.rowCount})
        <table border="1">
            <!-- column headers -->
            <tr bgcolor=cyan>
                <c:forEach var="columnName" items="${results.columnNames}">
                    <th><c:out value="${columnName}"/></th>
                </c:forEach>
            </tr>
            <!-- column data -->
            <c:forEach var="row" items="${results.rowsByIndex}">
                <tr>
                    <c:forEach var="column" items="${row}">
                        <td><c:out value="${column}"/></td>
                    </c:forEach>
                </tr>
            </c:forEach>
        </table>

    The solution to this was answered in another Stack Overflow question: http://stackoverflow.com/questions/425294/sql-database-views-in-grails IF SOMEONE WRITES A GOOD ANSWER, I'LL ACCEPT IT. I would like a 100% acceptance on all of my questions.

  • Create a SQL query to retrieve most recent records

    - by mattruma
    I am creating a status board module for my project team. The status board allows the user to set their status as in or out, and they can also provide a note. I was planning on storing all the information in a single table; an example of the data follows:

        Date              User       Status  Notes
        -------------------------------------------------------
        1/8/2009 12:00pm  B.Sisko    In      Out to lunch
        1/8/2009 8:00am   B.Sisko    In
        1/7/2009 5:00pm   B.Sisko    In
        1/7/2009 8:00am   B.Sisko    In
        1/7/2009 8:00am   K.Janeway  In
        1/5/2009 8:00am   K.Janeway  In
        1/1/2009 8:00am   J.Picard   Out     Vacation

    I would like to query the data and return the most recent status for each user; in this case, my query would return the following results:

        Date              User       Status  Notes
        -------------------------------------------------------
        1/8/2009 12:00pm  B.Sisko    In      Out to lunch
        1/7/2009 8:00am   K.Janeway  In
        1/1/2009 8:00am   J.Picard   Out     Vacation

    I am trying to figure out the Transact-SQL to make this happen. Any help would be appreciated.
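
    One common way to get the most recent row per user (a sketch, assuming SQL Server 2005 or later; the question does not name the table, so StatusBoard below is a hypothetical name) is to rank each user's rows by date and keep only the newest:

        -- Hypothetical table name StatusBoard; [Date] and [User] are bracketed
        -- because they collide with built-in names.
        WITH ranked AS (
            SELECT [Date], [User], Status, Notes,
                   ROW_NUMBER() OVER (PARTITION BY [User] ORDER BY [Date] DESC) AS rn
            FROM StatusBoard
        )
        SELECT [Date], [User], Status, Notes
        FROM ranked
        WHERE rn = 1;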

  • explicit copy constructor or implicit parameter by value

    - by R Samuel Klatchko
    I recently read (and unfortunately forgot where) that the best way to write operator= is like this:

        foo &operator=(foo other)
        {
            swap(*this, other);
            return *this;
        }

    instead of this:

        foo &operator=(const foo &other)
        {
            foo copy(other);
            swap(*this, copy);
            return *this;
        }

    The idea is that if operator= is called with an rvalue, the first version can optimize away construction of a copy. So when called with an rvalue the first version is faster, and when called with an lvalue the two are equivalent. I'm curious what other people think about this. Would people avoid the first version because of its lack of explicitness? Am I correct that the first version can be better and can never be worse?

  • How to optimize neural network by using genetic algorithm?

    - by Billy Coen
    I'm quite new to this topic, so any help would be great. What I need is to optimize a neural network in MATLAB by using a GA. My network has a [2x98] input and a [1x98] target. I've tried consulting the MATLAB help, but I'm still kind of clueless about what to do :( so any help would be appreciated. Thanks in advance. Edit: I guess I didn't say what is there to be optimized, as Dan said in the first answer. I guess the most important thing is the number of hidden neurons, and maybe the number of hidden layers and training parameters like the number of epochs. Sorry for not providing enough info; I'm still learning about this.

  • What's faster in JavaScript: a bunch of small setInterval loops, or one big one?

    - by RobertWHurst
    Just wondering if it's worth it to make a monolithic loop function or just add loops where they're needed. The big loop option would just be a loop of callbacks that are added dynamically with an add function. Adding a function would look like this:

        setLoop(function(){
            alert('hahaha! I\'m a really annoying loop that bugs you every tenth of a second');
        });

    setLoop would add the function to the monolithic loop. So, is this worth anything in performance, or should I just stick to lots of little loops using setInterval?

  • mysql query: SELECT DISTINCT column1, GROUP BY column2

    - by Adam
    Right now I have the following query:

        SELECT name, COUNT(name), time, price, ip, SUM(price)
        FROM tablename
        WHERE time >= $yesterday AND time < $today
        GROUP BY name

    And what I'd like to do is add a DISTINCT on column 'ip', i.e.

        SELECT DISTINCT ip FROM tablename

    So my final output would be all the columns, from all the rows where time is today, grouped by name (with a name count for each repeating name) and no duplicate ip addresses. What should my query look like? (Or alternatively, how can I add the missing filter to the output with PHP?) Thanks in advance.
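
    If the goal is one row per name with duplicate IP addresses ignored in the counting, one sketch (keeping the question's table name and PHP-interpolated date variables) is to aggregate with COUNT(DISTINCT ip); the original query's non-aggregated columns (time, price, ip) are dropped here because their values are ambiguous under GROUP BY:

        SELECT name,
               COUNT(name)        AS name_count,
               COUNT(DISTINCT ip) AS distinct_ips,
               SUM(price)         AS total_price
        FROM tablename
        WHERE time >= $yesterday AND time < $today
        GROUP BY name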

  • Optimizing a Soundex Query for finding similar names

    - by xkingpin
    My application will offer a list of suggestions for English names that "sound like" a given typed name. The query will need to be optimized and return results as quickly as possible. Which option would be the most optimal for returning results quickly? (Or offer your own suggestion if you have one.)

    A. Generate the Soundex hash and store it in the "Names" table, then do something like the following (this saves generating the Soundex hash for at least every row in my db per query, right?):

        select name from names where NameSoundex = Soundex('Ann')

    B. Use the Difference function (this must generate the Soundex for every name in the table?):

        select name from names where Difference(name, 'Ann') = 3

    C. Simple comparison:

        select name from names where Soundex(name) = Soundex('Ann')

    Option A seems to me like it would be the fastest to return results because it only generates the Soundex for one string and then compares it to an indexed column "NameSoundex". Option B should give more results than option A because the name does not have to be an exact match of the soundex, but could be slower. Assuming my table could contain millions of rows, what would yield the best results?
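
    A sketch of option A on SQL Server (where Soundex and Difference are built in): keep NameSoundex as a persisted computed column with an index on it, so each query computes Soundex only for the search string and then does an index seek. The column and index names below are illustrative:

        -- SOUNDEX is deterministic, so it can be persisted and indexed.
        ALTER TABLE names ADD NameSoundex AS SOUNDEX(name) PERSISTED;
        CREATE INDEX IX_names_NameSoundex ON names (NameSoundex);

        SELECT name
        FROM names
        WHERE NameSoundex = SOUNDEX('Ann');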

  • cached schwartzian transform

    - by davidk01
    I'm going through "Intermediate Perl" and it's pretty cool. I just finished the section on "The Schwartzian Transform" and after it sunk in I started to wonder why the transform doesn't use a cache. In lists that have several repeated values the transform recomputes the value for each one, so I thought: why not use a hash to cache results? Here's some code:

        # a place to keep our results
        my %cache;

        # the transformation we are interested in
        sub foo {
            # expensive operations
        }

        # some data
        my @unsorted_list = ....;

        # sorting with the help of the cache
        my @sorted_list = sort {
            ($cache{$a} or $cache{$a} = &foo($a))
                <=>
            ($cache{$b} or $cache{$b} = &foo($b))
        } @unsorted_list;

    Am I missing something? Why isn't the cached version of the Schwartzian transform listed in books and in general just better circulated? Because on first glance I think the cached version should be more efficient.

  • Import Namespace System.Query

    - by GateKiller
    I am trying to load LINQ on my .NET 3.5 enabled web server by adding the following to my .aspx page:

        <%@ Import Namespace="System.Query" %>

    However, this fails and tells me it cannot find the namespace:

        The type or namespace name 'Query' does not exist in the namespace 'System'

    I have also tried the following with no luck: System.Data.Linq, System.Linq, System.Xml.Linq. I believe that .NET 3.5 is working because var hello = "Hello World" seems to work. Can anyone help please? Cheers, Stephen. PS: I just want to clarify that I don't use Visual Studio; I simply have a text editor and write my code directly into .aspx files.

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:

        class A
        {
            int a;
            int b;
            int* v;
        };

    where:

    - The memory for v will be allocated in the constructor.
    - v will be allocated once when an object of type A is created and will never need to be resized.
    - The size of v will vary across all instances of A.

    The application will potentially have a huge number of such objects and mostly needs to stream a large number of these objects through the CPU, but only needs to perform very simple computations on the member variables.

    - Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory?
    - What tools and techniques can be used to test if this fragmentation is a performance bottleneck?
    - If such fragmentation is a performance issue, are there any techniques that could allow A and v to be allocated in a contiguous region of memory? Or are there any techniques to aid memory access, such as a pre-fetching scheme? For example, get an object of type A and operate on the other member variables whilst pre-fetching v.
    - If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance?

    The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, and compiled using either GCC or MSVC compilers.

  • Benefits of 'Optimize code' option in Visual Studio build

    - by gt
    Much of our C# release code is built with the 'Optimize code' option turned off. I believe this is to allow code built in Release mode to be debugged more easily. Given that we are creating fairly simple desktop software which connects to backend web services (i.e. not a particularly processor-intensive application), what sort of performance hit, if any, might be expected? And is any particular platform likely to be worse affected, e.g. multi-processor / 64 bit?

  • Grand Central Strategy for Opening Multiple Files

    - by user276632
    I have a working implementation using Grand Central dispatch queues that (1) opens a file and computes an OpenSSL DSA hash on "queue1", and (2) writes out the hash to a new "side car" file for later verification on "queue2". I would like to open multiple files at the same time, but based on some logic that doesn't "choke" the OS by having hundreds of files open and exceeding the hard drive's sustainable output. Photo browsing applications such as iPhoto or Aperture seem to open multiple files and display them, so I'm assuming this can be done. I'm assuming the biggest limitation will be disk I/O, as the application can (in theory) read and write multiple files simultaneously. Any suggestions? TIA

  • gcc memory alignment pragma

    - by aaa
    Hello. Does gcc have a memory alignment pragma, akin to #pragma vector aligned in the Intel compiler? I would like to tell the compiler to optimize a particular loop using aligned load/store instructions. Thanks

  • Random Complete System Unresponsiveness Running Mathematical Functions

    - by Computer Guru
    I have a program that loads a file (anywhere from 10MB to 5GB) a chunk at a time (ReadFile), and for each chunk performs a set of mathematical operations (basically calculates the hash). After calculating the hash, it stores info about the chunk in an STL map (basically <chunkID, hash>) and then writes the chunk itself to another file (WriteFile). That's all it does. This program will cause certain PCs to choke and die. The mouse begins to stutter, the task manager takes 2 min to show, ctrl+alt+del is unresponsive, running programs are slow.... the works. I've done literally everything I can think of to optimize the program, and have triple-checked all objects. What I've done:

    - Tried different (less intensive) hashing algorithms.
    - Switched all allocations to nedmalloc instead of the default new operator.
    - Switched from stl::map to unordered_set, found the performance to still be abysmal, so I switched again to Google's dense_hash_map.
    - Converted all objects to store pointers to objects instead of the objects themselves.
    - Cached all Read and Write operations. Instead of reading a 16k chunk of the file and performing the math on it, I read 4MB into a buffer and read 16k chunks from there instead. Same for all write operations - they are coalesced into 4MB blocks before being written to disk.
    - Ran extensive profiling with Visual Studio 2010, AMD Code Analyst, and perfmon.
    - Set the thread priority to THREAD_MODE_BACKGROUND_BEGIN.
    - Set the thread priority to THREAD_PRIORITY_IDLE.
    - Added a Sleep(100) call after every loop.

    Even after all this, the application still results in a system-wide hang on certain machines under certain circumstances. Perfmon and Process Explorer show minimal CPU usage (with the sleep), no constant reads/writes from disk, few hard pagefaults (and only ~30k pagefaults in the lifetime of the application on a 5GB input file), little virtual memory (never more than 150MB), no leaked handles, no memory leaks. The machines I've tested it on run Windows XP - Windows 7, x86 and x64 versions included. None have less than 2GB RAM, though the problem is always exacerbated under lower memory conditions. I'm at a loss as to what to do next. I don't know what's causing it - I'm torn between CPU or Memory as the culprit. CPU because without the sleep and under different thread priorities the system performance changes noticeably. Memory because there's a huge difference in how often the issue occurs when using unordered_set vs Google's dense_hash_map. What's really weird? Obviously, the NT kernel design is supposed to prevent this sort of behavior from ever occurring (a user-mode application driving the system to this sort of extreme poor performance!?)..... but when I compile the code and run it on OS X or Linux (it's fairly standard C++ throughout) it performs excellently even on poor machines with little RAM and weaker CPUs. What am I supposed to do next? How do I know what the hell it is that Windows is doing behind the scenes that's killing system performance, when all the indicators are that the application itself isn't doing anything extreme? Any advice would be most welcome.

  • What is the best algorithm for this array-comparison problem?

    - by mark
    What is the most efficient (for speed) algorithm to solve the following problem? Given 6 arrays, D1, D2, D3, D4, D5 and D6, each containing 6 numbers, like:

        D1[0] = number            D2[0] = number            ......    D6[0] = number
        D1[1] = another number    D2[1] = another number    ....
        .....                     ....                      ......    ....
        D1[5] = yet another number    ....                  ......    ....

    Given a second array ST1, containing 1 number:

        ST1[0] = 6

    Given a third array ans, containing 6 numbers:

        ans[0] = 3, ans[1] = 4, ans[2] = 5, ...... ans[5] = 8

    Using as index for the arrays D1, D2, D3, D4, D5 and D6 the number that goes from 0 to the number stored in ST1[0] minus one (in this example 6, so from 0 to 6-1), compare each element of the ans array against each D array. My algorithm so far is below; I tried to keep everything unlooped as much as possible.

        EML := ST1[0]   //number contained in ST1[0]
        EML1 := 0       //start index for the arrays D
        While EML1 < EML
            if D1[ELM1] = ans[0] goto two
            if D2[ELM1] = ans[0] goto two
            if D3[ELM1] = ans[0] goto two
            if D4[ELM1] = ans[0] goto two
            if D5[ELM1] = ans[0] goto two
            if D6[ELM1] = ans[0] goto two
            ELM1 = ELM1 + 1
        return 0   //If the ans[0] number is not found in any of D1[0-6], D2[0-6].... D6[0-6],
                   //return 0, which will then exclude the ans[0-6] numbers

        two:
        EML1 := 0   //start index for the arrays D
        While EML1 < EML
            if D1[ELM1] = ans[1] goto three
            if D2[ELM1] = ans[1] goto three
            if D3[ELM1] = ans[1] goto three
            if D4[ELM1] = ans[1] goto three
            if D5[ELM1] = ans[1] goto three
            if D6[ELM1] = ans[1] goto three
            ELM1 = ELM1 + 1
        return 0   //If the ans[1] number is not found, return 0 as above

        three:
        EML1 := 0
        While EML1 < EML
            if D1[ELM1] = ans[2] goto four
            if D2[ELM1] = ans[2] goto four
            if D3[ELM1] = ans[2] goto four
            if D4[ELM1] = ans[2] goto four
            if D5[ELM1] = ans[2] goto four
            if D6[ELM1] = ans[2] goto four
            ELM1 = ELM1 + 1
        return 0   //If the ans[2] number is not found, return 0 as above

        four:
        EML1 := 0
        While EML1 < EML
            if D1[ELM1] = ans[3] goto five
            if D2[ELM1] = ans[3] goto five
            if D3[ELM1] = ans[3] goto five
            if D4[ELM1] = ans[3] goto five
            if D5[ELM1] = ans[3] goto five
            if D6[ELM1] = ans[3] goto five
            ELM1 = ELM1 + 1
        return 0   //If the ans[3] number is not found, return 0 as above

        five:
        EML1 := 0
        While EML1 < EML
            if D1[ELM1] = ans[4] goto six
            if D2[ELM1] = ans[4] goto six
            if D3[ELM1] = ans[4] goto six
            if D4[ELM1] = ans[4] goto six
            if D5[ELM1] = ans[4] goto six
            if D6[ELM1] = ans[4] goto six
            ELM1 = ELM1 + 1
        return 0   //If the ans[4] number is not found, return 0 as above

        six:
        EML1 := 0
        While EML1 < EML
            if D1[ELM1] = ans[5] return 1   //If the ans[5] number is found in any of
            if D2[ELM1] = ans[5] return 1   //D1[0-6].... D6[0-6], return 1, which will
            if D3[ELM1] = ans[5] return 1   //then include the ans[0-6] numbers
            if D4[ELM1] = ans[5] return 1
            if D5[ELM1] = ans[5] return 1
            if D6[ELM1] = ans[5] return 1
            ELM1 = ELM1 + 1
        return 0

    As the language of choice, it would be pure C.

  • Is it possible to do A/B testing by page rather than by individual?

    - by mojones
    Let's say I have a simple ecommerce site that sells 100 different t-shirt designs. I want to do some A/B testing to optimise my sales. Let's say I want to test two different "buy" buttons. Normally, I would use A/B testing to randomly assign each visitor to see button A or button B (and try to ensure that the user experience is consistent by storing that assignment in session, cookies etc). Would it be possible to take a different approach and instead randomly assign each of my 100 designs to use button A or B, and measure the conversion rate as (number of sales of design n) / (pageviews of design n)? This approach would seem to have some advantages; I would not have to worry about keeping the user experience consistent - a given page (e.g. www.example.com/viewdesign?id=6) would always return the same html. If I were to test different prices, it would be far less distressing to the user to see different prices for different designs than different prices for the same design on different computers. I also wonder whether it might be better for SEO - my suspicion is that Google would "prefer" that it always sees the same html when crawling a page. Obviously this approach would only be suitable for a limited number of sites; I was just wondering if anyone has tried it?

  • How does loop address alignment affect the speed on Intel x86_64?

    - by Alexander Gololobov
    I'm seeing a 15% performance degradation of the same C++ code compiled to exactly the same machine instructions but located at differently aligned addresses. When my tiny main loop starts at 0x415220 it's faster than when it is at 0x415250. I'm running this on an Intel Core 2 Duo. I use gcc 4.4.5 on x86_64 Ubuntu. Can anybody explain the cause of the slowdown and how I can force gcc to optimally align the loop? Here is the disassembly for both cases with profiler annotation:

        415220 576 12.56% |XXXXXXXXXXXXXX       48 c1 eb 08     shr    $0x8,%rbx
        415224 110  2.40% |XX                   0f b6 c3        movzbl %bl,%eax
        415227      0.00% |                     41 0f b6 04 00  movzbl (%r8,%rax,1),%eax
        41522c  40  0.87% |                     48 8b 04 c1     mov    (%rcx,%rax,8),%rax
        415230 806 17.58% |XXXXXXXXXXXXXXXXXXX  4c 63 f8        movslq %eax,%r15
        415233 186  4.06% |XXXX                 48 c1 e8 20     shr    $0x20,%rax
        415237 102  2.22% |XX                   4c 01 f9        add    %r15,%rcx
        41523a 414  9.03% |XXXXXXXXXX           a8 0f           test   $0xf,%al
        41523c 680 14.83% |XXXXXXXXXXXXXXXX     74 45           je     415283 ::Run(char const*, char const*)+0x4b3
        41523e      0.00% |                     41 89 c7        mov    %eax,%r15d
        415241      0.00% |                     41 83 e7 01     and    $0x1,%r15d
        415245      0.00% |                     41 83 ff 01     cmp    $0x1,%r15d
        415249      0.00% |                     41 89 c7        mov    %eax,%r15d

        415250 679 13.05% |XXXXXXXXXXXXXXXX     48 c1 eb 08     shr    $0x8,%rbx
        415254 124  2.38% |XX                   0f b6 c3        movzbl %bl,%eax
        415257      0.00% |                     41 0f b6 04 00  movzbl (%r8,%rax,1),%eax
        41525c  43  0.83% |X                    48 8b 04 c1     mov    (%rcx,%rax,8),%rax
        415260 828 15.91% |XXXXXXXXXXXXXXXXXXX  4c 63 f8        movslq %eax,%r15
        415263 388  7.46% |XXXXXXXXX            48 c1 e8 20     shr    $0x20,%rax
        415267 141  2.71% |XXX                  4c 01 f9        add    %r15,%rcx
        41526a 634 12.18% |XXXXXXXXXXXXXXX      a8 0f           test   $0xf,%al
        41526c 749 14.39% |XXXXXXXXXXXXXXXXXX   74 45           je     4152b3 ::Run(char const*, char const*)+0x4c3
        41526e      0.00% |                     41 89 c7        mov    %eax,%r15d
        415271      0.00% |                     41 83 e7 01     and    $0x1,%r15d
        415275      0.00% |                     41 83 ff 01     cmp    $0x1,%r15d
        415279      0.00% |                     41 89 c7        mov    %eax,%r15d

  • How to use the Query Window correctly in SQL Server 2008

    - by Richard77
    Hello, what should I do to avoid the commands being executed again each time I hit the 'Execute!' icon? I mean this:

        USE master;
        GO
        CREATE DATABASE Sales
        GO
        USE Sales;
        GO
        CREATE TABLE Customers(
            CustomerID int NOT NULL,
            LName varchar (50) NOT NULL,
            FName varchar (50) NULL,
            Status varchar (10),
            ModifiedBy varchar (30) NULL
        )
        GO

    When I click Execute!, SQL Server tries to redo the same thing. What I do for now is delete the query window contents completely and then write what I need before clicking the Execute icon. But I doubt that I should be doing that. What can I do to keep writing the commands without having to clear the query window each time? Thanks for helping
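
    Two things help here. In Management Studio, highlighting only the statements you want to run executes just that selection. Alternatively, the script can be written so that re-running the whole window is harmless; a sketch with existence guards around the same statements:

        IF DB_ID('Sales') IS NULL
            CREATE DATABASE Sales;
        GO
        USE Sales;
        GO
        IF OBJECT_ID('dbo.Customers', 'U') IS NULL
            CREATE TABLE Customers(
                CustomerID int NOT NULL,
                LName varchar (50) NOT NULL,
                FName varchar (50) NULL,
                Status varchar (10),
                ModifiedBy varchar (30) NULL
            );
        GO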

  • JRuby rspec to be run in parallel

    - by Priyank
    Hi. Is there something like Spork for JRuby too? We want to parallelize our specs to run faster and pre-load the classes while running the rake task; however, we have not been able to do so. Since our project is considerable in size, the specs take about 15 minutes to complete, and this poses a serious challenge to quick turnaround. Any ideas are more than welcome. Cheers

  • Linq query with subquery as comma-separated values

    - by Keith
    In my application, a company can have many employees and each employee may have multiple email addresses. The database schema relates the tables like this:

        Company - CompanyEmployeeXref - Employee - EmployeeAddressXref - Email

    I am using Entity Framework and I want to create a LINQ query that returns the name of the company and a comma-separated list of its employees' email addresses. Here is the query I am attempting:

        from c in Company
        join ex in CompanyEmployeeXref on c.Id equals ex.CompanyId
        join e in Employee on ex.EmployeeId equals e.Id
        join ax in EmployeeAddressXref on e.Id equals ax.EmployeeId
        join a in Address on ax.AddressId equals a.Id
        select new { c.Name, a.Email.Aggregate(x=x + ",") }

    Desired output:

        "Company1", "[email protected],[email protected],[email protected]"
        "Company2", "[email protected],[email protected],[email protected]"
        ...

    I know this code is wrong; I think I'm missing a group by, but it illustrates the point. I'm not sure of the syntax. Is this even possible? Thanks for any help.
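
    LINQ to Entities generally cannot translate this kind of string concatenation into SQL, so answers typically either join the addresses in memory after the query or push the aggregation into the database. For reference, a T-SQL sketch of the same result using the FOR XML PATH idiom; the table and column names are inferred from the LINQ joins above, so treat them as assumptions:

        SELECT c.Name,
               STUFF((SELECT ',' + a.Email
                      FROM CompanyEmployeeXref x
                      JOIN Employee e             ON e.Id = x.EmployeeId
                      JOIN EmployeeAddressXref ax ON ax.EmployeeId = e.Id
                      JOIN Address a              ON a.Id = ax.AddressId
                      WHERE x.CompanyId = c.Id
                      FOR XML PATH('')), 1, 1, '') AS Emails
        FROM Company c;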

  • Complicated idea - how to create car racing for my RPG game's players

    - by Donator
    So, I want to create car racing for my RPG game's players. A player can create a race and choose how many participants can take part in it. After the race is created, other people can join it. When the maximum number of participants has been collected, the race begins. My idea: when the last participant joins, instantly choose the winner (whoever's car is the best wins), but how can I do it? If I choose to pick the winner right after the last participant joins, then I have to put many queries in one page (select data from the table, then delete the race, then select the players' cars' statistics and pick the winner, and then again, using MySQL, send a message to everyone). But this idea is really not optimal and it will lag cruelly for that last person. Maybe you have ideas on how I can avoid the lag and make it more optimal. Thank you very much.
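
    One way to keep the last joiner's page cheap is to let MySQL pick the winner in a single query instead of pulling every participant's car statistics into PHP; the race cleanup and messages can then be handed off to a scheduled task. A sketch with hypothetical table and column names, since the question does not show its schema:

        -- race_participant(race_id, player_id) and car(player_id, performance)
        -- are placeholder names for illustration only.
        SELECT p.player_id
        FROM race_participant AS p
        JOIN car AS c ON c.player_id = p.player_id
        WHERE p.race_id = 123
        ORDER BY c.performance DESC
        LIMIT 1;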

  • Memory efficient int-int dict in Python

    - by Bolo
    Hi, I need a memory efficient int-int dict in Python that would support the following operations in O(log n) time:

        d[k] = v  # replace if present
        v = d[k]  # None or a negative number if not present

    I need to hold ~250M pairs, so it really has to be tight. Do you happen to know a suitable implementation (Python 2.7)? EDIT Removed impossible requirement and other nonsense. Thanks, Craig and Kylotan! To rephrase. Here's a trivial int-int dictionary with 1M pairs:

        >>> import random, sys
        >>> from guppy import hpy
        >>> h = hpy()
        >>> h.setrelheap()
        >>> d = {}
        >>> for _ in xrange(1000000):
        ...     d[random.randint(0, sys.maxint)] = random.randint(0, sys.maxint)
        ...
        >>> h.heap()
        Partition of a set of 1999530 objects. Total size = 49161112 bytes.
         Index    Count    %      Size   %  Cumulative   %  Kind (class / dict of class)
             0        1    0  25165960  51    25165960  51  dict (no owner)
             1  1999521  100  23994252  49    49160212 100  int

    On average, a pair of integers uses 49 bytes. Here's an array of 2M integers:

        >>> import array, random, sys
        >>> from guppy import hpy
        >>> h = hpy()
        >>> h.setrelheap()
        >>> a = array.array('i')
        >>> for _ in xrange(2000000):
        ...     a.append(random.randint(0, sys.maxint))
        ...
        >>> h.heap()
        Partition of a set of 14 objects. Total size = 8001108 bytes.
         Index  Count  %     Size    %  Cumulative    %  Kind (class / dict of class)
             0      1  7  8000028  100     8000028  100  array.array

    On average, a pair of integers uses 8 bytes. I accept that 8 bytes/pair in a dictionary is rather hard to achieve in general. Rephrased question: is there a memory-efficient implementation of an int-int dictionary that uses considerably less than 49 bytes/pair?

  • Join Query Help

    - by John
    Hello, the query below works well. It pulls data from two MySQL tables, "submission" and "login." I would like to also pull data from a third table called "comment" in the same database. The table "comment" has the following fields: commentid, loginid, submissionid, comment, datecommented. Two of the fields in the table "login" are called "loginid" and "username." In the query below, I would like to count all "commentid" in "comment" where "loginid" equals the "loginid" in "login" where "username" equals "$profile." How can I do this? Thanks in advance, John

        $sqlStr1 = "SELECT l.username, l.loginid, s.loginid, s.submissionid, s.title, s.url,
                           s.datesubmitted, s.displayurl, l.created,
                           count(s.submissionid) countSubmissions
                    FROM submission AS s
                    INNER JOIN login AS l ON s.loginid = l.loginid
                    WHERE l.username = '$profile'";
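
    A sketch of one way to add the comment count: joining the "comment" table directly would multiply the submission rows, so the comments are counted in a correlated subquery keyed on the same loginid (column names are taken from the question; the non-aggregated submission columns from the original query are left out here so the GROUP BY stays unambiguous):

        SELECT l.username, l.created,
               COUNT(s.submissionid) AS countSubmissions,
               (SELECT COUNT(c.commentid)
                  FROM comment AS c
                 WHERE c.loginid = l.loginid) AS countComments
        FROM login AS l
        INNER JOIN submission AS s ON s.loginid = l.loginid
        WHERE l.username = '$profile'
        GROUP BY l.username, l.created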
