Search Results

Search found 6841 results on 274 pages for 'outer join'.


  • Most efficient way to write over file after reading

    - by Ryan McClure
    I'm reading in some data from a file, manipulating it, and then overwriting it to the same file. Until now, I've been doing it like so:

        open (my $inFile, $file) or die "Could not open $file: $!";
        $retString .= join ('', <$inFile>);
        ...
        close ($inFile);

        open (my $outFile, '>', $file) or die "Could not open $file: $!";
        print $outFile $retString;
        close ($outFile);

    However, I realized I can just use the truncate function and open the file for read/write:

        open (my $inFile, '+<', $file) or die "Could not open $file: $!";
        $retString .= join ('', <$inFile>);
        ...
        truncate $inFile, 0;
        print $inFile $retString;
        close ($inFile);

    I don't see any examples of this anywhere. It seems to work well, but am I doing it correctly? Is there a better way to do this?


  • Creating a dynamic two-column iOS spinning wheel with HTML/Javascript

    - by JSW189
    I am trying to create a dynamic two-column spinning wheel for iOS Safari using this HTML/Javascript wheel. However, I am having trouble getting the value of the first column to change the results of the second column. I have tried using an if statement to get the value of the first variable (var beverage) and add the values of the second column correspondingly. Does anybody know what I am doing wrong, or if there is a better approach?

        function openBirthDate() {
            var beverage = { 1:'Coffee', 2:'Soda' };
            //THIS IS WHERE I'M HAVING TROUBLE
            var results = SpinningWheel.getSelectedValues();
            if (results.values === 1) {
                var company = { 1:'Starbucks', 2:'Dunkin Donuts' };
            } else {
                var company = { 1:'Coke', 2:'Pepsi' };
            }
            var size = { 1:'Tall', 2:'Grande', 3:'Venti' };
            SpinningWheel.addSlot(beverage, '', 1);
            SpinningWheel.addSlot(company, '', 1);
            SpinningWheel.addSlot(size, '', 1);
            SpinningWheel.setCancelAction(cancel);
            SpinningWheel.setDoneAction(done);
            SpinningWheel.open();
        }

        function done() {
            var results = SpinningWheel.getSelectedValues();
            document.getElementById('result').innerHTML =
                'values: ' + results.values.join(' ') +
                '<br />keys: ' + results.keys.join(', ');
        }

        function cancel() {
            document.getElementById('result').innerHTML = 'cancelled!';
        }

        window.addEventListener('load', function(){
            setTimeout(function(){ window.scrollTo(0,0); }, 100);
        }, true);


  • ASP.NET SqlDataSource update and create FK reference

    - by William
    The short version: I have a GridView bound to a data source whose SelectCommand does a left join because the FK can be null. On Update I want to create a record in the FK table if the FK is null, and then update the parent table with the new record's ID. Is this possible to do with just SqlDataSources?

    The detailed version: I have two tables, Company and Address. The column Company.AddressId can be null. On my ascx page I am using a SqlDataSource to select a left join of Company and Address, and a GridView to display the results. By having the UpdateCommand and DeleteCommand of the SqlDataSource execute two statements separated by a semi-colon, I am able to use the GridView's Edit and Delete functionality to update both tables simultaneously.

    The problem I have is when Company.AddressId is null. What I need is for the data source to create a record in the Address table, update the Company table with the new Address.ID, and then proceed with the update as usual. I would like to do this with just data sources if possible, for consistency's and simplicity's sake. Is it possible to have my data source do this, or perhaps add a second data source to the page to handle some of it? Once I have that working I can probably figure out how to make it work with the InsertCommand as well, but if you are on a roll and have an answer for that too, feel free to provide it. Thanks.
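    One shape this could take, as a sketch only: make the UpdateCommand a small T-SQL batch (SQL Server assumed; CompanyId, Street, City, and Name are hypothetical column names) that creates the Address row when it is missing and wires it up via SCOPE_IDENTITY():

        -- If the company has no address yet, create one and attach it;
        -- otherwise update the existing address in place.
        IF NOT EXISTS (SELECT 1 FROM Company
                       WHERE CompanyId = @CompanyId AND AddressId IS NOT NULL)
        BEGIN
            INSERT INTO Address (Street, City) VALUES (@Street, @City);
            UPDATE Company SET AddressId = SCOPE_IDENTITY()
            WHERE CompanyId = @CompanyId;
        END
        ELSE
        BEGIN
            UPDATE Address SET Street = @Street, City = @City
            WHERE Id = (SELECT AddressId FROM Company WHERE CompanyId = @CompanyId);
        END;
        UPDATE Company SET Name = @Name WHERE CompanyId = @CompanyId;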


  • customer.name joining transactions.name vs. customer.id [serial] joining transactions.id [integer]

    - by Frank Computer
    INFORMIX-SQL 7.32 Pawnshop Application: one-to-many relationship where each customer (master) can have many transactions (detail).

        customer(
            id serial,
            pk_name char(30), {PATERNAL-NAME MATERNAL-NAME, FIRST-NAME MIDDLE-NAME}
            [...]
        );
        unique index on id;
        unique cluster index on name;

        transaction(
            fk_name char(30),
            ticket_number serial,
            [...]
        );
        dups cluster index on fk_name;
        unique index on ticket_number;

    Several people have told me this is not the correct way to join master to detail. They said I should always join customer.id [serial] to transactions.id [integer]. When a customer pawns merchandise, the clerk queries the master using wildcards on name. The query usually returns several customers; the clerk scrolls until locating the right name, enters 'D' to change to the detail transactions table, all transactions are automatically queried, then the clerk enters 'A' to add a new transaction.

    The problem with joining customer.id to transaction.id is that although the customer table is maintained in sorted name order, clustering the transaction table by fk_id groups the transactions by fk_id, but not in the same order as the customer names. So when the clerk scrolls through customer names in the master, the system has to jump all over the place to locate the clustered transactions belonging to each customer. As each new customer is added, the next id is assigned to that customer, but new customers don't show up in alphabetical order. I experimented with id joins and confirmed the decrease in performance. How can I use id joins instead of name joins and still preserve the clustered transaction order by name if transactions has no name column?
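    As a sketch of the id-join direction (fk_customer_id is the integer column transaction would gain in place of fk_name), the alphabetical browsing order can come from the join rather than from physical clustering, at the cost of a sort:

        -- Join on the surrogate key, but let the customer's name drive
        -- the ordering, so detail rows still come back alphabetically
        -- even though they are clustered by fk_customer_id.
        SELECT c.pk_name, t.ticket_number
        FROM customer c, transaction t
        WHERE t.fk_customer_id = c.id
        ORDER BY c.pk_name, t.ticket_number;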


  • Passing a WHERE clause for a Linq-to-Sql query as a parameter

    - by Mantorok
    Hi all. This is probably pushing the boundaries of Linq-to-Sql a bit, but given how versatile it has been so far I thought I'd ask. I have 3 queries that select identical information and differ only in the where clause. I know I can pass a delegate in, but that only lets me filter the results already returned; I want to build up the query via a parameter to ensure efficiency. Here is the query:

        from row in DataContext.PublishedEvents
        join link in DataContext.PublishedEvent_EventDateTimes on row.guid equals link.container
        join time in DataContext.EventDateTimes on link.item equals time.guid
        where row.ApprovalStatus == "Approved"
            && row.EventType == "Event"
            && time.StartDate <= DateTime.Now.Date.AddDays(1)
            && (!time.EndDate.HasValue || time.EndDate.Value >= DateTime.Now.Date.AddDays(1))
        orderby time.StartDate
        select new EventDetails
        {
            Title = row.EventName,
            Description = TrimDescription(row.Description)
        };

    The code I want to apply via a parameter would be:

        time.StartDate <= DateTime.Now.Date.AddDays(1)
            && (!time.EndDate.HasValue || time.EndDate.Value >= DateTime.Now.Date.AddDays(1))

    Is this possible? I don't think it is, but thought I'd check first. Thanks.


  • Using function arguments to dynamically generate a query

    - by Varun
    I am working on an issue management system, developed in PHP/MySQL. It requires search functionality where the user specifies the search parameters, and based on these parameters the system returns the result set. To solve this I am trying to write a function to which all the user-selected parameters are passed as arguments; based on the arguments I will dynamically generate the query. Sample query:

        select * from tickets
        inner join ticket_assigned_to on tickets.id = ticket_assigned_to.ticket_id
        where tickets.project_id in ('')
            and tickets.status in ('')
            and ticket_assigned_to.user_id in ('')
            and tickets.reporter_user_id = ''
            and tickets.operator_user_id in ('')
            and tickets.due_date between '' and ''
            and tickets.ts_created between '' and '';

    I also need to handle cases where the arguments can be ORed or ANDed in the query. For example:

        select * from tickets
        inner join ticket_assigned_to on tickets.id = ticket_assigned_to.ticket_id
        where tickets.project_id in ('')
            and tickets.status in ('')
            or tickets.due_date = ''
            or tickets.ts_created between '' and '';

    I am also planning to use the same function in other places in the project, e.g. to display all the tickets of a user, or all tickets created between given dates, and so on. How should I handle this? Should I go with a single function which handles all of this, or numerous small functions? I need guidance here.


  • get_post_meta return empty string

    - by Jean-philippe Emond
    I guess it is a small issue, but I'm running a SQL query to get some post ids:

        $result = $wpdb->get_results("SELECT wppm.post_id FROM wp_postmeta wppm INNER JOIN wp_posts wpp ON wppm.post_id=wpp.ID WHERE wppm.meta_key LIKE 'activity'");

    (count: 302) After that, I take each id and run get_post_meta like this:

        foreach($result as $id){
            $activity = get_post_meta($id);
            var_dump($activity);
            foreach($activity as $key=>$value){
                if(is_array($value) && $key=="age"){
                    var_dump($value);
                }
            }
        }

    (var_dump result: string "") Same thing happens if I run it with:

        $activity = get_post_meta($id,'activity',true);

    where we ought to get a result. What is wrong? Thank you for your help!

    [Bonus question] If the "activity" meta_key has an array value and I fetch it directly, like:

        $result = $wpdb->get_results("SELECT wppm.meta_value FROM wp_postmeta wppm INNER JOIN wp_posts wpp ON wppm.post_id=wpp.ID WHERE wppm.meta_key LIKE 'activity'");

    how do I parse it? Thanks again!


  • Delivering activity feed items in a moderately scalable way

    - by sotangochips
    The application I'm working on has an activity feed where each user can see their friends' activity (much like Facebook). I'm looking for a moderately scalable way to show a given user's activity stream on the fly. I say 'moderately' because I'm looking to do this with just a database (PostgreSQL) and maybe memcached. For instance, I want this solution to scale to 200k users, each with 100 friends.

    Currently, there is a master activity table that stores the rendered HTML for the given activity (Jim added a friend, George installed an application, etc.). This master activity table keeps the source user, the HTML, and a timestamp. Then there's a separate ('join') table that simply keeps a pointer to the person who should see this activity in their friend feed, and a pointer to the object in the main activity table. So if I have 100 friends and I do 3 activities, the join table grows by 300 items. Clearly this table will grow very quickly. It has the nice property, though, that fetching the activity to show to a user takes a single (relatively) inexpensive query.

    The other option is to just keep the main activity table and query it with something like:

        select * from activity where source_user in (1, 2, 44, 2423, ... my friend list)

    This has the disadvantage that you're querying for users who may never be active, and as your friend list grows, this query can get slower and slower.

    I see the pros and the cons of both sides, but I'm wondering if some SO folks might help me weigh the options and suggest one way or the other. I'm also open to other solutions, though I'd like to keep it simple and not install something like CouchDB. Many thanks!
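    For reference, a sketch of the read path under the first (fan-out) design, with illustrative table and column names; with an index on (recipient_id, created_at) this stays a single indexed probe no matter how large friend lists get:

        -- One indexed lookup per page view, instead of a growing IN (...) list.
        SELECT a.html, a.created_at
        FROM activity_recipients r
        JOIN activity a ON a.id = r.activity_id
        WHERE r.recipient_id = :viewer_id
        ORDER BY a.created_at DESC
        LIMIT 50;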


  • Linq and returning types

    - by cdotlister
    My GUI is calling a service project that does some linq work and returns data to my GUI. However, I am battling with the return type of the method. After some reading, I have this as my method:

        public static IEnumerable GetDetailedAccounts()
        {
            IEnumerable accounts =
                (from a in Db.accounts
                 join i in Db.financial_institution on a.financial_institution.financial_institution_id equals i.financial_institution_id
                 join acct in Db.z_account_type on a.z_account_type.account_type_id equals acct.account_type_id
                 orderby i.name
                 select new { account_id = a.account_id, name = i.name, description = acct.description });
            return accounts;
        }

    However, my caller is battling a bit. I think I am screwing up the return type, or not handling the caller well, but it's not working as I'd hoped. This is how I am attempting to call the method from my GUI:

        IEnumerable accounts = Data.AccountService.GetDetailedAccounts();
        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine("Accounts:");
        Console.ForegroundColor = ConsoleColor.White;
        foreach (var acc in accounts)
        {
            Console.WriteLine(string.Format("{0:00} {1}", acc.account_id, acc.name + " " + acc.description));
        }
        int accountid = WaitForKey();

    However, the foreach (and the acc in it) isn't working: acc doesn't know about the name, description and id that I set up in the method. Am I at least close to being right?


  • need help optimizing oracle query

    - by deming
    I need help in optimizing the following query. It is taking a long time to finish, almost 213 seconds. Because of some constraints, I can not add an index and have to live with the existing ones.

        INSERT INTO temp_table_1 (USER_ID, role_id, participant_code, status_id)
        WITH
        A AS (SELECT USER_ID user_id, ROLE_ID, STATUS_ID, participant_code
              FROM USER_ROLE WHERE participant_code IS NOT NULL),    --1
        B AS (SELECT ROLE_ID FROM CMP_ROLE WHERE GROUP_ID = 3),
        C AS (SELECT USER_ID FROM USER)                              --2
        SELECT USER_ID, ROLE_ID, PARTICIPANT_CODE, MAX(STATUS_ID)
        FROM A
        INNER JOIN B USING (ROLE_ID)
        INNER JOIN C USING (USER_ID)
        GROUP BY USER_ID, role_id, participant_code;

        --1 = query when run alone takes 100+ seconds
        --2 = query when run alone takes 19 seconds

        DELETE temp_table_1
        WHERE ROWID NOT IN (
            SELECT a.ROWID
            FROM temp_table_1 a, USER_ROLE b
            WHERE a.status_id = b.status_id
              AND (b.ACTIVE IN (1)
                   OR (b.ACTIVE IN (0,3)
                       AND SYSDATE BETWEEN b.effective_from_date AND b.effective_to_date))
        );

    It seems like the person who wrote the query is trying to get everything into a temp table first and then deleting records from the temp table; whatever is left is the actual result. Can't it be done in such a way that there is no need for the delete? Just getting the results we need should save time.
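    For what it's worth, a sketch that folds the DELETE's active-status filter into the initial SELECT, so the temp table is populated with final results in one pass. Two caveats: the filter now applies before MAX(STATUS_ID) rather than after, which is only equivalent if that is acceptable for this data, and the USER table is written as user_tbl here because USER is an Oracle reserved word:

        INSERT INTO temp_table_1 (user_id, role_id, participant_code, status_id)
        SELECT ur.user_id, ur.role_id, ur.participant_code, MAX(ur.status_id)
        FROM user_role ur
        INNER JOIN cmp_role cr ON cr.role_id = ur.role_id AND cr.group_id = 3
        INNER JOIN user_tbl u ON u.user_id = ur.user_id   -- the "USER" table above
        WHERE ur.participant_code IS NOT NULL
          AND (ur.active IN (1)
               OR (ur.active IN (0, 3)
                   AND SYSDATE BETWEEN ur.effective_from_date AND ur.effective_to_date))
        GROUP BY ur.user_id, ur.role_id, ur.participant_code;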


  • Aggregate path counts using HierarchyID

    - by austincav
    Business problem: understand process fallout using analytics data. Here is what we have done so far:

        1. Build a dictionary table with every possible process step
        2. Find each process "start"
        3. Find the last step for each start
        4. Join the dictionary table to the last step to find the path to the final step

    In the final report output we end up with a list of paths for each start to each final step:

        User    Fallout Step HierarchyID.ToString()
        A       1/1/1
        B       1/1/1/1/1
        C       1/1/1/1
        D       1/1/1
        E       1/1

    This means that five users (A-E) started the process. Assume only user B finished; the other four did not. Since this is a simple example (without branching), we want the output to look as follows:

        Step    Unique Users
        1       5
        2       5
        3       4
        4       2
        5       1

    The easiest solution I could think of is to take each hierarchyID.ToString(), parse it out into a set of subpaths, JOIN back to the dictionary table, and output using GROUP BY. Given the volume of data, I'd like to use the built-in HierarchyID functions, e.g. IsAncestorOf. Any ideas or thoughts on how I could write this? Maybe a recursive CTE?
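    On the recursive-CTE idea, a sketch along these lines might work, assuming a FinalSteps table holding each user's final-step hierarchyid in a FalloutStep column (GetAncestor and GetLevel are built-in hierarchyid methods):

        -- Expand each user's final path into all of its ancestor paths,
        -- then count distinct users that reached each step depth.
        WITH expanded AS (
            SELECT UserId, FalloutStep AS Step
            FROM FinalSteps
            UNION ALL
            SELECT UserId, Step.GetAncestor(1)
            FROM expanded
            WHERE Step.GetLevel() > 1
        )
        SELECT Step.GetLevel() AS StepDepth,
               COUNT(DISTINCT UserId) AS UniqueUsers
        FROM expanded
        GROUP BY Step.GetLevel()
        ORDER BY StepDepth;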


  • How do you compare using .NET types in an NHibernate ICriteria query for an ICompositeUserType?

    - by gabe
    I have an answered StackOverflow question about how to combine two legacy CHAR database date and time fields into one .NET DateTime property in my POCO (thanks much Berryl!). Now I am trying to get a custom ICriteria query to work against that very DateTime property, to no avail. Here's my query:

        ICriteria criteria = Session.CreateCriteria<InputFileLog>()
            .Add(Expression.Gt(MembersOf<InputFileLog>.GetName(x => x.FileCreationDateTime), DateTime.Now.AddDays(-14)))
            .AddOrder(Order.Desc(Projections.Id()))
            .CreateCriteria(typeof(InputFile).Name)
            .Add(Expression.Eq(MembersOf<InputFile>.GetName(x => x.Id), inputFileName));
        IList<InputFileLog> list = criteria.List<InputFileLog>();

    And here's the query it generates:

        SELECT this_.input_file_token as input1_9_2_, this_.file_creation_date as file2_9_2_,
               this_.file_creation_time as file3_9_2_, this_.approval_ind as approval4_9_2_,
               this_.file_id as file5_9_2_, this_.process_name as process6_9_2_,
               this_.process_status as process7_9_2_, this_.input_file_name as input8_9_2_,
               gonogo3_.input_file_token as input1_6_0_, gonogo3_.go_nogo_ind as go2_6_0_,
               inputfile1_.input_file_name as input1_3_1_, inputfile1_.src_code as src2_3_1_,
               inputfile1_.process_cat_code as process3_3_1_
        FROM input_file_log this_
        left outer join go_nogo gonogo3_ on this_.input_file_token=gonogo3_.input_file_token
        inner join input_file inputfile1_ on this_.input_file_name=inputfile1_.input_file_name
        WHERE this_.file_creation_date > :p0 and this_.file_creation_time > :p1
          and inputfile1_.input_file_name = :p2
        ORDER BY this_.input_file_token desc;
        -- :p0 = '20100401', :p1 = '15:15:27', :p2 = 'LMCONV_JR'

    The query is exactly what I would expect, actually, except it doesn't give me what I want (all the rows in the last 2 weeks), because in the DB it's doing a greater-than comparison using CHARs instead of DATEs. I have no idea how to get the query to convert the CHAR values into a DATE without resorting to CreateSQLQuery(), which I would like to avoid. Anyone know how to do this?
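    At the SQL level, the predicate the report actually needs is a tuple-style comparison; date > X AND time > Y wrongly drops rows that are later by date but earlier by clock time. A sketch of the target SQL (getting NHibernate to emit it, perhaps as a disjunction of two criteria, is the remaining piece):

        -- CHAR dates in YYYYMMDD format sort chronologically, so no
        -- conversion is needed; the time only matters on the boundary date.
        SELECT *
        FROM input_file_log
        WHERE file_creation_date > :cutoff_date
           OR (file_creation_date = :cutoff_date
               AND file_creation_time >= :cutoff_time);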


  • Unable to plot graph using matplotlib

    - by Aman Deep Gautam
    I have the following code, which searches all the directories in the current directory and then takes data from those files to plot a graph. The data is read correctly (as verified by printing), but no points are plotted on the graph.

        import argparse
        import os
        import matplotlib.pyplot as plt

        # find the present working directory
        pwd = os.path.dirname(os.path.abspath(__file__))
        # find all the folders in the present working directory
        dirs = [f for f in os.listdir('.') if os.path.isdir(f)]
        plt.figure()
        plt.xlim(0, 20000)
        plt.ylim(0, 1)
        for directory in dirs:
            os.chdir(os.path.join(pwd, directory))
            chd_dir = os.path.dirname(os.path.abspath(__file__))
            files = [fl for fl in os.listdir('.') if os.path.isfile(fl)]
            print files
            for f in files:
                f_obj = open(os.path.join(chd_dir, f), 'r')
                list_x = []
                list_y = []
                for i in xrange(0, 4):
                    f_obj.next()
                for line in f_obj:
                    temp_list = line.split()
                    print temp_list
                    list_y.append(temp_list[0])
                    list_x.append(temp_list[1])
                print 'final_list'
                print list_x
                print list_y
                plt.plot(list_x, list_y, 'r.')
                f_obj.close()
            os.chdir(pwd)
        plt.savefig("test.jpg")

    The input files look like the following:

        5 865 14709 15573
        14709 1.32667e-06 664 0.815601
        14719 1.55333e-06 674 0.813277
        14729 1.82667e-06 684 0.810185
        14739 1.4e-06 694 0.808459

    Can anybody help me with why this is happening? Being new, I would also like to know of a tutorial where I can get help with this kind of plotting, as the tutorial I was following made me end up here. Any help appreciated.


  • Ruby Challenge - efficiently change the last character of every word in a sentence to a capital

    - by emson
    Hi all. I was recently challenged to write some Ruby code to change the last character of every word in a sentence into a capital, such that the string:

        "script to convert the last letter of every word to a capital"

    becomes:

        "scripT tO converT thE lasT letteR oF everY worD tO A capitaL"

    This was my optimal solution; however, I'm sure you wizards have much better solutions and I would be really interested to hear them.

        "script to convert the last letter of every word to a capital".split.map{|w|w<<w.slice!(-1).chr.upcase}.join' '

    For those interested, here is an explanation of what is going on:

    split - splits the sentence into an array; the default delimiter is a space, and with Ruby you don't need to use brackets here.

    map - the array from split is passed to map, which opens a block and processes each word (w) in the array.

    the block - slice!(-1) removes the last character of the word, chr converts it to a character (not an ASCII code), and upcase capitalises it; this character is then appended (<<) back onto the word, which is missing its sliced last letter.

    join - finally, the array of words is joined back together with a ' ' to reform the sentence. Enjoy.


  • HQL query problem

    - by yigit
    Hi all, I'm using this HQL query for my filters. The query works perfectly except for the width (string) part. Here is the query:

        public IList<ColorGroup> GetDistinctColorGroups(int typeID, int finishID, string width)
        {
            string queryStr = "Select distinct c from ColorGroup c inner join c.Products p " +
                              "where p.ShowOnline = 1 ";
            if (typeID > 0)
                queryStr += " and p.ProductType.ID = " + typeID;
            if (finishID > 0)
                queryStr += " and p.FinishGroup.ID = " + finishID;
            if (width != "")
                queryStr += " and p.Size.Width = " + width;

            IList<ColorGroup> colors = NHibernateSession.CreateQuery(queryStr).List<ColorGroup>();
            return colors;
        }

    ProductType and Size have the same mappings and relations. This is the error:

        NHibernate.QueryException: illegal syntax near collection: Size
        [Select distinct c from .Domain.ColorGroup c inner join c.Products p
         where p.ShowOnline = 1 and p.ProductType.ID = 1 and p.FinishGroup.ID = 5
         and p.Size.Width = 4]

    Any ideas?


  • Optimization of Function with Dictionary and Zip()

    - by eWizardII
    Hello, I have the following function:

        def filetxt():
            word_freq = {}
            lvl1 = []
            lvl2 = []
            total_t = 0
            users = 0
            text = []
            for l in range(0, 500):
                # Open file
                if os.path.exists("C:/Twitter/json/user_" + str(l) + ".json") == True:
                    with open("C:/Twitter/json/user_" + str(l) + ".json", "r") as f:
                        text_f = json.load(f)
                        users = users + 1
                        for i in range(len(text_f)):
                            text.append(text_f[str(i)]['text'])
                            total_t = total_t + 1
                else:
                    pass

            # Filter
            occ = 0
            import string
            for i in range(len(text)):
                s = text[i]  # Sample string
                a = re.findall(r'(RT)', s)
                b = re.findall(r'(@)', s)
                occ = len(a) + len(b) + occ
                s = s.encode('utf-8')
                out = s.translate(string.maketrans("", ""), string.punctuation)

                # Create wordlist/dictionary
                word_list = text[i].lower().split(None)
                for word in word_list:
                    word_freq[word] = word_freq.get(word, 0) + 1
                keys = word_freq.keys()
                numbo = range(1, len(keys) + 1)
                WList = ', '.join(keys)
                NList = str(numbo).strip('[]')
                WList = WList.split(", ")
                NList = NList.split(", ")
                W2N = dict(zip(WList, NList))
                for k in range(0, len(word_list)):
                    word_list[k] = W2N[word_list[k]]
                for i in range(0, len(word_list) - 1):
                    lvl1.append(word_list[i])
                    lvl2.append(word_list[i + 1])

    I have used the profiler to find that the greatest CPU time seems to be spent on the zip() function and the join and split parts of the code. I'm looking to see if there is anything I have overlooked that I could potentially clean up to make the code more optimized, since the greatest lag seems to be in how I am working with the dictionaries and the zip() function. Any help would be appreciated, thanks!


  • Python Speeding Up Retrieving data from extremely large string

    - by Burninghelix123
    I have a list I converted to a very, very long string, as I am trying to edit it; as you can gather, it's called tempString. It works as of now, it just takes way too long to operate, probably because it is several different regex subs. They are as follows:

        tempString = ','.join(str(n) for n in coords)
        tempString = re.sub(',{2,6}', '_', tempString)
        tempString = re.sub("[^0-9\-\.\_]", ",", tempString)
        tempString = re.sub(',+', ',', tempString)
        clean1 = re.findall(('[-+]?[0-9]*\.?[0-9]+,[-+]?[0-9]*\.?[0-9]+,'
                             '[-+]?[0-9]*\.?[0-9]+'), tempString)
        tempString = '_'.join(str(n) for n in clean1)
        tempString = re.sub(',', ' ', tempString)

    Basically it's a long string containing commas and about 1-5 million sets of 4 floats/ints (a mixture of both is possible):

        -5.65500020981,6.88999986649,-0.454999923706,1,,,-5.65500020981,6.95499992371,-0.454999923706,1,,,

    The 4th number in each set I don't need/want; I'm essentially just trying to split the string into a list with 3 floats in each element, separated by a space. The above code works flawlessly, but as you can imagine it is quite time-consuming on large strings. I have done a lot of research on here for a solution, but they all seem geared towards words, i.e. swapping out one word for another.

    EDIT: OK, so this is the solution I'm currently using:

        def getValues(s):
            output = []
            while s:
                # get the three values you want, discard the 3 commas, and the
                # remainder of the string
                v1, v2, v3, _, _, _, s = s.split(',', 6)
                output.append("%s %s %s" % (v1.strip(), v2.strip(), v3.strip()))
            return output

        coords = getValues(tempString)

    Anyone have any advice to speed this up even further? After running some tests, it still takes much longer than I'm hoping for. I've been glancing at numPy, but I honestly have absolutely no idea how to do the above with it. I understand that after the above has been done and the values are cleaned up I could use them more efficiently with numPy, but I'm not sure how numPy could apply to the above. The above takes around 20 minutes to clean through 50k sets; I can't imagine how long it would take on my full string of 1 million sets. It's just surprising that the program that originally exported the data took only around 30 secs for the 1 million sets.


  • How many WCF connections can a single host handle?

    - by mafutrct
    I'll try to explain this with an example. I'm writing a chat application. There are users that can join chat rooms. A user has to log in before he can join any room. Currently, there is a single service. A user logs in using this service, then sends and receives messages for all joined rooms via this single service:

        channel.Login("Hans Moleman", "password");
        channel.JoinRoom("name of room");
        channel.SendChat("name of room", "hello");

    I'm thinking about changing the design so there is a new WCF connection for each joined room. In the actual app, the number of connections per client is likely going to be in the range of 10-100, possibly more. Is this a good idea? Or are ~100 connections per client too much? The server should be able to handle many clients (range 100-1000, later up to 10k). In case it matters, I'm using NetTcpBinding.


  • SELECT SUM PHP MySQL problem

    - by user345426
    This is driving me nuts! Below you will find my PHP/MySQL code, but I will post the direct MySQL statement here:

        SELECT SUM( ot.value ) AS msa
        FROM orders o
        LEFT JOIN orders_total ot ON ot.orders_id = o.orders_id
        WHERE ot.class = 'ot_total'
          AND UNIX_TIMESTAMP( o.date_purchased ) >= 1262332800
          AND UNIX_TIMESTAMP( o.date_purchased ) <= 1264924800
          AND o.sales_rep_id = '2'

    When I execute this statement inside of phpMyAdmin I get the sum for ot.value, which is associated to "msa". But when I run my PHP code it does not return a value. Anyone see the problem?

        // works in phpMyAdmin but not displaying during PHP execution!
        $monthly_sales_amount_sql = "SELECT SUM(ot.value) AS msa
            FROM orders o
            LEFT JOIN orders_total ot ON ot.orders_id = o.orders_id
            WHERE ot.class = 'ot_total'
              AND UNIX_TIMESTAMP(o.date_purchased) >= $start_timestamp
              AND UNIX_TIMESTAMP(o.date_purchased) <= $end_timestamp
              AND o.sales_rep_id = '" . $sales_rep_id "'";
        $result = mysql_query($monthly_sales_amount_sql);
        $row = mysql_fetch_assoc($result);
        echo "MSA: " . $row['msa'] . "<BR><BR>";
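    Separately from the PHP issue, wrapping o.date_purchased in UNIX_TIMESTAMP() defeats any index on that column; an equivalent, index-friendly sketch moves the conversion onto the constants instead:

        -- Same date window, but the bare column lets MySQL use an index
        -- on date_purchased.
        SELECT SUM(ot.value) AS msa
        FROM orders o
        LEFT JOIN orders_total ot ON ot.orders_id = o.orders_id
        WHERE ot.class = 'ot_total'
          AND o.date_purchased >= FROM_UNIXTIME(1262332800)
          AND o.date_purchased <= FROM_UNIXTIME(1264924800)
          AND o.sales_rep_id = '2';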


  • Are there any security issues to avoid when providing a either-email-or-username-can-act-as-username

    - by Tchalvak
    I am in the process of moving from a "username/password" system to one that uses email for login. I don't think there's any horrible problem with allowing either email or username for login, and I remember seeing sites that I consider somewhat respectable doing it as well, but I'd like to be aware of any major security flaws I may be introducing. More specifically, here is the pertinent function (the query_row function parameterizes the SQL):

        function authenticate($p_user, $p_pass) {
            $user = (string)$p_user;
            $pass = (string)$p_pass;
            $returnValue = false;
            if ($user != '' && $pass != '') {
                // Allow login via username or email.
                $sql = "SELECT account_id, account_identity, uname, player_id
                        FROM accounts
                        join account_players on account_id = _account_id
                        join players on player_id = _player_id
                        WHERE lower(account_identity) = lower(:login)
                           OR lower(uname) = lower(:login)
                          AND phash = crypt(:pass, phash)";
                $returnValue = query_row($sql, array(':login'=>$user, ':pass'=>$pass));
            }
            return $returnValue;
        }

    Notably, I have added the WHERE lower(account_identity) = lower(:login) OR lower(uname) = lower(:login) section to allow graceful backwards compatibility for users who won't be used to using their email for the login procedure. I'm not completely sure that that OR is safe, though. Are there some ways I should tighten the security of the PHP code above?
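    One note on that: in SQL, AND binds more tightly than OR, so as written the password check only guards the uname branch of the identity match. A sketch of the parenthesized predicate:

        -- Parenthesize the identity match so the password check applies
        -- to both the email and username branches.
        SELECT account_id, account_identity, uname, player_id
        FROM accounts
        JOIN account_players ON account_id = _account_id
        JOIN players ON player_id = _player_id
        WHERE (lower(account_identity) = lower(:login)
               OR lower(uname) = lower(:login))
          AND phash = crypt(:pass, phash);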


  • Learn Prolog Now! DCG Practice Example

    - by Timothy
    I have been progressing through Learn Prolog Now! as self-study and am now learning about Definite Clause Grammars. I am having some difficulty with one of the Practical Session's tasks. The task reads:

    The formal language a^n b^2m c^2m d^n consists of all strings of the following form: an unbroken block of as followed by an unbroken block of bs followed by an unbroken block of cs followed by an unbroken block of ds, such that the a and d blocks are exactly the same length, and the b and c blocks are also exactly the same length and furthermore consist of an even number of bs and cs respectively. For example, ε, abbccd, and aaabbbbccccddd all belong to a^n b^2m c^2m d^n. Write a DCG that generates this language.

    I am able to write rules that generate a^n d^n, b^2m c^2m, and even a^n b^2m and c^2m d^n... but I can't seem to join all these rules into a^n b^2m c^2m d^n. The following are my rules:

        s1 --> [].
        s1 --> a, s1, d.
        a --> [a].
        d --> [d].

        s2 --> [].
        s2 --> c, c, s2, d, d.
        c --> [c].
        d --> [d].

    Is a^n b^2m c^2m d^n really a CFG, and is it possible to write a DCG using only what was taught in the lesson (no additional arguments or code, etc.)? If so, can anyone offer me some guidance on how I can join these rules so that I can solve the given task?


  • Data historian queries

    - by Scott Dennis
    Hi, I have a table that contains data for electric motors. The format is:

        DATE (DateTime)          | TagName (VarChar(50)) | Val (Float)
        2009-11-03 17:44:13.000  | Motor_1               | 123.45
        2009-11-04 17:44:13.000  | Motor_1               | 124.45
        2009-11-05 17:44:13.000  | Motor_1               | 125.45
        2009-11-03 17:44:13.000  | Motor_2               | 223.45
        2009-11-04 17:44:13.000  | Motor_2               | 224.45

    Data for each motor is inserted daily, so there would be 31 Motor_1s and 31 Motor_2s, etc. We do this so we can trend it on our control system displays. "Val" is a non-resettable accumulation from a PLC (controller). I am using views to extract last month's max and min values, and the same for this month's data. Then I join the two and calculate the difference to get the actual run hours for that month.

    This is my query for last month's max value:

        SELECT TagName, Val AS Hours
        FROM dbo.All_Data_From_Last_Mon AS cur
        WHERE (NOT EXISTS
                (SELECT TagName, Val
                 FROM dbo.All_Data_From_Last_Mon AS high
                 WHERE (TagName = cur.TagName) AND (Val > cur.Val)))

    This is my query for last month's min value:

        SELECT TagName, Val AS Hours
        FROM dbo.All_Data_From_Last_Mon AS cur
        WHERE (NOT EXISTS
                (SELECT TagName, Val
                 FROM dbo.All_Data_From_Last_Mon AS high
                 WHERE (TagName = cur.TagName) AND (Val < cur.Val)))

    This is the query that calculates the difference, and it runs a bit slow:

        SELECT dbo.Motors_Last_Mon_Max.TagName,
               STR(dbo.Motors_Last_Mon_Max.Hours - dbo.Motors_Last_Mon_Min.Hours, 12, 2) AS Hours
        FROM dbo.Motors_Last_Mon_Min
        RIGHT OUTER JOIN dbo.Motors_Last_Mon_Max
            ON dbo.Motors_Last_Mon_Min.TagName = dbo.Motors_Last_Mon_Max.TagName

    I know there is a better way. Ultimately I just need last month's total and this month's total. Any help would be appreciated. Thanks in advance.
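    For reference, a sketch of a single aggregate pass that produces the same per-tag difference without the two NOT EXISTS views or the outer join:

        -- One scan: max minus min per tag for the month.
        SELECT TagName,
               STR(MAX(Val) - MIN(Val), 12, 2) AS Hours
        FROM dbo.All_Data_From_Last_Mon
        GROUP BY TagName;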


  • How can I make Access think there is a primary key

    - by user3692757
    I have a table and I'm trying to join it with another table, but it doesn't have a distinctive primary key. The two tables do share similarities: "Acct" and "Location". If I could concatenate "Acct & Location" it would become a primary key, but Access won't let me make a primary key from a calculation. I provided a small sample below. Each hospital has an "Acct", but the "Acct" will show up once for each "Location". How can I join these in a relationship?

        Acct | Location
        1    | ABI
        2    | ABI
        3    | ABI
        1    | NHO
        2    | NHO
        3    | NHO
        1    | NTX
        2    | NTX
        3    | NTX

    I connected the two in Relationships and tried to "Enforce Referential Integrity", but it indicated "No unique index found for the referenced field of the primary key". Also, if I run a "Find Unmatched Query" it doesn't find anything. I think it's because I can't make it realize that "Acct" and "Location" can be perceived as a primary key when used in conjunction with each other. I tried to load an image to illustrate it better, but I haven't made enough posts.
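    For what it's worth, Access DDL (run from a query in SQL view) can declare a multi-column primary key directly, with no calculated column needed; a sketch, with the table name Hospitals assumed:

        -- Makes the (Acct, Location) pair the primary key, so the
        -- Relationships window can enforce referential integrity against it.
        ALTER TABLE Hospitals
            ADD CONSTRAINT pk_acct_location PRIMARY KEY (Acct, Location);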


  • MySQL - What is wrong with this query or my database? Terrible performance.

    - by Moss
        SELECT * FROM `employees` a
        LEFT JOIN (SELECT phone1 p1, COUNT(*) c FROM `employees` GROUP BY phone1) b
            ON a.phone1 = b.p1;

    I'm not sure if it is this query in particular that has the problem; I have been getting terrible performance in general with this database. The table in question has 120,000 rows. I have tried this particular query remotely and locally, with the MyISAM and InnoDB engines, with different types of joins, and with and without an index on phone1. I can get this to complete in about 4 minutes on a 10,000-row table, but performance drops exponentially with larger tables. Remotely it will lose the connection to the server, and locally it brings my system to its knees and seems to go on forever. This query is only a smaller step I was trying to do when a larger query couldn't complete.

    Maybe I should explain the whole scenario. I have one big flat ugly table that lists a bunch of people, their contact info, and the info of the companies they work for. I'm trying to normalize the database and intelligently determine which phone numbers apply to individual people and which apply to an office location. My reasoning is that if a phone number occurs multiple times, and the number of occurrences equals the number of times the street address it is attached to occurs, then it must be an office number.

    So the first step is to count each phone number, grouping by phone number. Normally if you just use COUNT()...GROUP BY it will only list the first record it finds in each group, so I figured I have to join the full table to the count table where the phone number matches. This does work, but as I said I can't successfully complete it on any table much larger than 10,000 rows. This seems pathetic; this doesn't seem like a crazy query to do. Is there a better way to achieve what I want, or do I have to break my large table into 12 pieces, or is there something wrong with the table or db?
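    A sketch of one common workaround: derived tables in older MySQL versions have no indexes, so the join degrades to a scan of the derived table for every outer row; materializing the counts into an indexed temporary table keeps both sides of the join indexed (the VARCHAR(32) width is an assumption):

        -- Materialize the per-number counts once, with an index on p1.
        CREATE TEMPORARY TABLE phone_counts (
            p1 VARCHAR(32) NOT NULL,
            c  INT NOT NULL,
            KEY (p1)
        ) SELECT phone1 AS p1, COUNT(*) AS c
          FROM employees
          GROUP BY phone1;

        -- Now both sides of the join can use an index.
        SELECT a.*, b.c
        FROM employees a
        LEFT JOIN phone_counts b ON a.phone1 = b.p1;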


  • Selecting random top 3 listings per shop for a range of active advertising shops

    - by GraGra33
    I'm trying to display a list of shops, each with 3 random items from their shop if they have 3 or more listings, that are actively advertising. I have 3 tables: one for the shops ("Shops"), one for the listings ("Listings"), and one that tracks active advertisers ("AdShops"). Using the statement below, the listings returned are random, however I'm not getting exactly 3 listings (rows) returned per shop.

        SELECT AdShops.ID, Shops.url, Shops.image_url, Shops.user_name AS shop_name, Shops.title,
               L.listing_id AS listing_id, L.title AS listing_title, L.price AS price,
               L.image_url AS listing_image_url, L.url AS listing_url
        FROM AdShops
        INNER JOIN Shops ON AdShops.user_id = Shops.user_id
        INNER JOIN Listings AS L ON Shops.user_id = L.user_id
        WHERE (Shops.is_vacation = 0 AND Shops.listing_count > 2
               AND L.listing_id IN (SELECT TOP 3 L2.listing_id
                                    FROM Listings AS L2
                                    WHERE L2.listing_id IN (SELECT TOP 100 PERCENT L3.listing_id
                                                            FROM Listings AS L3
                                                            WHERE (L3.user_id = L.user_id))
                                    ORDER BY NEWID()))
        ORDER BY Shops.shop_name

    I'm stumped. Anyone have any ideas on how to fix it? The ideal solution would be one record per store in which the 3 listings (and their associated data) appear as columns rather than rows - is this possible?
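    Since TOP and NEWID() suggest SQL Server, here is a hedged ROW_NUMBER() sketch (SQL Server 2005+) of the per-shop random top 3; the correlated TOP 3 subquery in the original re-randomizes for every outer row, which is why the per-shop count drifts:

        -- Number each shop's listings in a single random order, then keep
        -- the first three per shop.
        SELECT *
        FROM (
            SELECT s.user_name AS shop_name, s.url, s.image_url, s.title,
                   l.listing_id, l.title AS listing_title, l.price,
                   l.image_url AS listing_image_url, l.url AS listing_url,
                   ROW_NUMBER() OVER (PARTITION BY s.user_id ORDER BY NEWID()) AS rn
            FROM AdShops a
            INNER JOIN Shops s ON a.user_id = s.user_id
            INNER JOIN Listings l ON l.user_id = s.user_id
            WHERE s.is_vacation = 0 AND s.listing_count > 2
        ) ranked
        WHERE rn <= 3
        ORDER BY shop_name, rn;

    Pivoting the three rows into columns could then be done over rn, e.g. MAX(CASE WHEN rn = 1 THEN listing_title END) and so on, grouping by shop.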

