Search Results

Search found 34207 results on 1369 pages for 'query output'.


  • Subversion commands not being run by Ubuntu rc.local

    - by talentedmrjones
    Here is my rc.local for an autoscaling Amazon EC2 instance based on Ubuntu (note that user names, domains, and paths have been changed for security purposes):

        logger "Begin rc.local startup script:"
        logger "svn checkout"
        sudo -u nonRootUser /usr/bin/svn co svn+ssh://[email protected]/path/to/repo /var/www/html | logger
        logger "chown writeable folder"
        chown www-data /var/www/html/writeableFolder
        logger "restart apache"
        /etc/init.d/apache2 restart | logger
        exit 0

    And here is the output of sudo tail -n 40 /var/log/syslog:

        Mar 10 22:05:20 ubuntu logger: Begin rc.local startup script:
        Mar 10 22:05:20 ubuntu logger: svn checkout
        Mar 10 22:05:20 ubuntu logger: chown writeable folder

    Of course it's not getting to the apache2 restart, because it errored on the chown. I did find, however, that if I do a checkout beforehand and change the rc.local svn command to an svn update, it still does not run the svn command, but it does log the apache2 restart successfully. These same svn commands work perfectly when I run them manually, though it's strange that within rc.local they produce no output whatsoever to logger, yet apache2 restart does. I've also tried running the svn co and svn update both with sudo -u and without. How do I get the svn command to run? Either a full checkout or an update; at this point either would be better than nothing!


  • Create Session Variable from different datasources?

    - by Szafranamn
    Currently I am developing a dynamic website using Dreamweaver CS5 with ColdFusion 9, with my databases in Access alongside a database generated through QuickBooks and QODBC. I have established a login session variable stemming from the login page. This session variable is drawn from one datasource, "Access", table "Logininfo", field "FullName". I want to create another session variable, either at this point or further into the member's page, to use in a query. That variable would stem from another datasource, "QBs", table "Invoice", field "CustomerRefFullName", which is generated through QuickBooks and QODBC. I am not sure whether this is possible, but if it is, how do I do it?

    I want to do this so I can query the Invoice table and show each customer only the invoices unique to them on their page, which means the query has to be related to their login credentials. My current plan, and the reason I need the second session variable: when I create the page where the customer sees their invoices, I need a recordset that accesses the Invoice table. I think the best way would be to somehow convert session.FullName (which came from the Access datasource, Logininfo table) into session.CustomerRefFullName (which would have to come from datasource "QBs", table "Invoice", field "CustomerRefFullName"). That way I could filter on CustomerRefFullName in the WHERE clause and have each logged-in user see only their invoices. So: is there a way to turn a session variable based on one datasource/table into a different session variable based on another datasource/table, even if it is unique just to that page? If there is a better route to take, I would greatly appreciate the advice.

    Below is the login code; let me know if additional information is needed:

        <cfif IsDefined("FORM.username")>
        SELECT FullName, Username, Password, AccessLevels
        FROM Logininfo
        WHERE Username= AND Password=


  • iOS UITableView reloadData doesn't increase content height

    - by Zoltan Varadi
    I'm facing a strange error in my iOS app and haven't been able to figure out the reason why. In my UITableView I can add new cells, so I refresh the array that is the datasource and call reloadData. Everything works without error: numberOfRowsInSection is called again, and its value is one more than its previous value was. The new cell even gets inserted at the bottom of the tableview, but I can't scroll down to it. I only know that it's there because the tableview bounces and I can see it. I'm guessing the tableview's content height is not getting increased for some reason, but I have no idea why. I'm using iOS 6, by the way. Any help is very much appreciated! Thanks, Zoli

    EDIT, answers for Srikanth's questions:

    How is data getting added?
    There's an NSArray containing objects of a specific type. The array.count is the number of cells. This NSArray gets its values from a database query.

    When you say you are refreshing the array, what do you mean?
    By refreshing the array I mean I execute a new query on the database and put the results of this query into an NSArray, which will be the tableview's datasource array. Like this:

        dataSourceArray = [dbManager executeQuery];
        [tableView reloadData];

    Are you adding the data within the same view controller?
    Yes.

    Can you show some code as to what you are doing?
    See above.

    Is the table view at the root of the view controller's view, or is it within another view?
    The tableview is the main view's first child.

    Can you try adding data to the array at the beginning, so that you can view the cell being added at the top?
    I don't understand what you mean.


  • Conditional row count in linq to nhibernate doesn't work

    - by Lucasus
    I want to translate the following simple SQL query into LINQ to NHibernate:

        SELECT NewsId,
               sum(n.UserHits) as 'HitsNumber',
               sum(CASE WHEN n.UserHits > 0 THEN 1 ELSE 0 END) as 'VisitorsNumber'
        FROM UserNews n
        GROUP BY n.NewsId

    My simplified UserNews class:

        public class AktualnosciUzytkownik
        {
            public virtual int UserNewsId { get; set; }
            public virtual int UserHits { get; set; }
            public virtual User User { get; set; } // UserId key in db table
            public virtual News News { get; set; } // NewsId key in db table
        }

    I've written the following LINQ query:

        var hitsPerNews = (from n in Session.Query<UserNews>()
                           group n by n.News.NewsId into g
                           select new
                           {
                               NewsId = g.Key,
                               HitsNumber = g.Sum(x => x.UserHits),
                               VisitorsNumber = g.Count(x => x.UserHits > 0)
                           }).ToList();

    But the generated SQL just ignores my x => x.UserHits > 0 predicate and adds an unnecessary LEFT OUTER JOIN:

        SELECT news1_.NewsId AS col_0_0_,
               CAST(SUM(news0_.UserHits) AS INT) AS col_1_0_,
               CAST(COUNT(*) AS INT) AS col_2_0_
        FROM UserNews news0_
        LEFT OUTER JOIN News news1_ ON news0_.NewsId = news1_.NewsId
        GROUP BY news1_.NewsId

    How can I fix or work around this issue? Maybe this can be done better with QueryOver syntax?


  • php - create columns from mysql list

    - by user271619
    I have a long list generated from a simple MySQL SELECT query. Currently (as shown in the code below) I simply emit one table row per record, so nothing complicated. However, I want to divide the output into more than one column, depending on the number of returned results. I've been racking my brain over how to do the counting in the PHP, and I'm not getting the results I need.

        <table>
        <? $query = mysql_query("SELECT * FROM `sometable`");
        while($rows = mysql_fetch_array($query)){ ?>
            <tr>
                <td><?php echo $rows['someRecord']; ?></td>
            </tr>
        <? } ?>
        </table>

    Obviously there's only one column generated. What I want: once the returned records reach 10, start a new column. In other words, if the returned results number 12, I have 2 columns; if I have 22 results, I'll have 3 columns; and so on.
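
    The arithmetic is plain integer chunking: record i belongs to column i // 10, and ceil(count / 10) gives the number of columns. A language-agnostic sketch in Python with a made-up records list (in the PHP loop, the equivalent is testing the row counter against multiples of 10 to close one column and open the next):

        # 12 records -> 2 columns, 22 records -> 3 columns
        records = ["record %d" % i for i in range(22)]

        PER_COLUMN = 10
        columns = [records[i:i + PER_COLUMN]
                   for i in range(0, len(records), PER_COLUMN)]

        print(len(columns))      # 3
        print(len(columns[-1]))  # 2 records spill into the last column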


  • How to iterate JPA collections in Google App Engine

    - by palto
    Hi, I use Google App Engine with DataNucleus and JPA. I'm having a really hard time grasping how I'm supposed to read stuff from the datastore and pass it to a JSP. If I load a list of POJOs with the EntityManager and pass it to the JSP, it crashes with:

        org.datanucleus.exceptions.NucleusUserException: Object Manager has been closed

    I understand why this is happening: I fetch the list, close the entity manager, and pass the list to the JSP, at which point it fails because the list is lazy. How do I make the list NOT lazy without resorting to hacks like calling size() or something like that? Here is what I'm trying to do:

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            req.setAttribute("parties", getParties());
            RequestDispatcher dispatcher =
                getServletContext().getRequestDispatcher("/WEB-INF/parties.jsp");
            dispatcher.forward(req, resp);
        }

        private List<Party> getParties() {
            EntityManager em = entityManagerProvider.get();
            try {
                Query query = em.createQuery("SELECT p FROM Party p");
                return query.getResultList();
            } finally {
                em.close();
            }
        }


  • CakePHP: Need help using saveField to update a field in a belongsTo model

    - by afrisch
    I am trying to update a password in two different models/tables in CakePHP. I can update it fine in the parent model, but not in the second model.

    Models:

        Users (hasOne Gameprofile), PK = id
        Gameprofiles (belongsTo User), FK = user_id

    Here is a stripped-down version of my function in users_controller.php:

        function updatepass() {
            if (!empty($this->data)) {
                $this->User->id = $this->Auth->user('id');
                $this->User->saveField('sha1password',
                    $this->Auth->password($this->data['User']['newpass']));
                $this->User->Gameprofile->saveField('plainpassword',
                    $this->data['User']['newpass']);
            }
        }

    When I execute the function, the users table is updated fine, but the gameprofiles table is not updated; instead, Cake does an INSERT. SQL query log:

        1195 Query UPDATE `users` SET `sha1password` = 'e9443e9f5e1a07832aad1b2f84de1a666daf89b5' WHERE `users`.`id` = 30
        1195 Query INSERT INTO `gameprofiles` (`plainpassword`) VALUES ('abc')

    Is there a way to get CakePHP to do an UPDATE using saveField on a model with a belongsTo attribute? I've tried various ways to set user_id before executing the second saveField, but just can't seem to find the winning combination. Any help is greatly appreciated!


  • SQL Grouping with multiple joins combining results incorrectly

    - by Matt
    Hi, I'm having trouble with my query combining records when it shouldn't. I have two tables, Authors and Publications, related by PublicationID in a many-to-many relationship: each author can have many publications, and each publication has many authors. I want my query to return every publication for a set of authors and include the IDs of all the authors that have contributed to each publication, grouped into one field. (I am working with MySQL.) Pictured graphically below:

        Table: authors                 Table: publications
        AuthorID | PublicationID       PublicationID | PublicationName
        1        | 123                 123           | A
        1        | 456                 456           | B
        2        | 123                 789           | C
        2        | 789
        3        | 123
        3        | 456

    I want my result set to be the following:

        AuthorID | PublicationID | PublicationName | AllAuthors
        1        | 123           | A               | 1,2,3
        1        | 456           | B               | 1,3
        2        | 123           | A               | 1,2,3
        2        | 789           | C               | 2
        3        | 123           | A               | 1,2,3
        3        | 456           | B               | 1,3

    This is my query:

        Select Author1.AuthorID, Publications.PublicationID, Publications.PubName,
               GROUP_CONCAT(TRIM(Author2.AuthorID) ORDER BY Author2.AuthorID ASC) AS 'AuthorsAll'
        FROM Authors AS Author1
        LEFT JOIN Authors AS Author2 ON Author1.PublicationID = Author2.PublicationID
        INNER JOIN Publications ON Author1.PublicationID = Publications.PublicationID
        WHERE Author1.AuthorID = "1" OR Author1.AuthorID = "2" OR Author1.AuthorID = "3"
        GROUP BY Author2.PublicationID

    But it returns the following instead:

        AuthorID | PublicationID | PublicationName | AllAuthors
        1        | 123           | A               | 1,1,1,2,2,2,3,3,3
        1        | 456           | B               | 1,1,3,3
        2        | 789           | C               | 2

    It does deliver the desired output when there is only one AuthorID in the WHERE clause. I have not been able to figure it out; does anyone know where I'm going wrong?
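
    For what it's worth, the duplication can be reproduced outside SQL: the self-join yields one row per (Author1, Author2) pair within a publication, and grouping by the publication alone pools the pairs from all three selected Author1 values into one GROUP_CONCAT. A small Python simulation using the table contents above (the usual fixes are grouping by both Author1.AuthorID and the publication, and/or using GROUP_CONCAT(DISTINCT ...)):

        from itertools import product

        authors = [(1, 123), (1, 456), (2, 123), (2, 789), (3, 123), (3, 456)]

        # the self-join on PublicationID, restricted to publication 123
        pairs = [(a1, a2) for (a1, p1), (a2, p2) in product(authors, authors)
                 if p1 == p2 == 123]

        # grouping by publication alone concatenates Author2 once per pair:
        print(sorted(a2 for _, a2 in pairs))  # [1, 1, 1, 2, 2, 2, 3, 3, 3]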


  • Catchable fatal error: Object of class view_user could not be converted to string in C:\xampp\xampp\htdocs\textile\admin\view_class.php on line 9 [closed]

    - by user1282142
        <?php
        include('config.inc');

        class view_user
        {
            function view($user)
            {
                $temp  = mysql_query("SELECT * from register_user where user_name = '$user'");
                $value = mysql_fetch_array($temp);
                $query = mysql_query("SELECT order_user.design_name, order_user.reaad, order_user.pick, order_user.yarn_detail
                                      FROM register_user
                                      INNER JOIN order_user
                                      ON register_user.register_id = '$value[register_id]'
                                      AND order_user.register_id = $value[register_id]");
                if ($query) {
                    echo "<center><table border = 1>
                            <tr>
                              <th>Design Name</th>
                              <th>Read</th>
                              <th>Pick</th>
                              <th>Design Detail</th>
                            </tr>";
                    while ($row = mysql_fetch_array($query)) {
                        echo "<tr>";
                        echo "<td>$row[design_name]</td>";
                        echo "<td>$row[reaad]</td>";
                        echo "<td>$row[pick]</td>";
                        echo "<td>$row[yarn_detail]</td>";
                        echo "</tr></center>";
                    }
                } else {
                    echo "User doesn't Exist";
                    include('footer1.php');
                }
            }
        }
        ?>


  • why does setting stderr=subprocess.STDOUT fix a subprocess.check_output call?

    - by ShankarG
    I have a Python script running on a small server that is called in three different ways: from within another Python script, by cron, or by gammu-smsd (an SMS daemon that pairs with the wonderful mobile utility gammu). The script is for maintenance and contained the following kludge to measure used space on the system (presumably this is possible from within Python, but this was quick and dirty):

        reportdict['Used Space'] = subprocess.check_output(
            ["df / | tail -1 | awk '{ print $5; }'"], shell=True)[0:-1]

    Oddly enough, this line would only fail when the script was called by a shell script running from gammu-smsd. The line would fail with a CalledProcessError exception saying "returned exit status 2", even though the output attribute of the CalledProcessError object contained the correct output. The only command in the pipeline that would give such an exit status is awk, with status 2 indicating a fatal error. If the Python script with this line was called by cron, by another Python script, or from the command line, the line would work fine. I broke my head trying to fix the environment for the script, thinking that must be the problem. Finally, I put in stderr=subprocess.STDOUT, like so:

        reportdict['Used Space'] = subprocess.check_output(
            ["df / | tail -1 | awk '{ print $5; }'"],
            stderr=subprocess.STDOUT, shell=True)[0:-1]

    This was a debug measure to help me figure out whether some output was coming on stderr. But after this the script started working, even when called from gammu-smsd! Why might this be the case? I ask for future reference when using subprocess...
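
    For what it's worth, the kludge itself can be replaced with a pure-Python measurement, which takes the shell, awk, and the caller's environment out of the equation entirely. A minimal sketch using os.statvfs (Unix-only; the arithmetic mirrors df's Use% column, though df's exact rounding may differ):

        import math
        import os

        def used_space_percent(path="/"):
            # df's Use% is used / (used + available-to-non-root), rounded up
            st = os.statvfs(path)
            used = st.f_blocks - st.f_bfree
            avail = st.f_bavail
            return "%d%%" % math.ceil(100.0 * used / (used + avail))

        # e.g. reportdict['Used Space'] = used_space_percent()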


  • Memory leaks after using typeinfo::name()

    - by icabod
    I have a program in which, partly for informational logging, I output the names of some classes as they are used (specifically, I add an entry to a log saying something along the lines of "Messages::CSomeClass transmitted to 127.0.0.1"). I do this with code similar to the following:

        std::string getMessageName(void) const {
            return std::string(typeid(*this).name());
        }

    And yes, before anyone points it out, I realise that the output of type_info::name() is implementation-specific. According to MSDN:

        The type_info::name member function returns a const char* to a null-terminated
        string representing the human-readable name of the type. The memory pointed to
        is cached and should never be directly deallocated.

    However, when I exit my program in the debugger, any "new" use of type_info::name() shows up as a memory leak. If I output the information for 2 classes, I get 2 memory leaks, and so on. This hints that the cached data is never being freed. While this is not a major issue, it looks messy, and after a long debugging session it could easily hide genuine memory leaks. I have looked around and found some useful information (one SO answer gives some interesting information about how type_info may be implemented), but I'm wondering if this memory should normally be freed by the system, or if there is something I can do to "not notice" the leaks when debugging. I do have a back-up plan, which is to code the getMessageName method myself and not rely on type_info::name(), but I'd like to know anyway if there's something I've missed.


  • Python Speeding Up Retrieving data from extremely large string

    - by Burninghelix123
    I have a list I converted to a very, very long string, called tempString, as I am trying to edit it. It works; as of now it just takes way too long to operate, probably because it is several different regex subs. They are as follows:

        tempString = ','.join(str(n) for n in coords)
        tempString = re.sub(',{2,6}', '_', tempString)
        tempString = re.sub("[^0-9\-\.\_]", ",", tempString)
        tempString = re.sub(',+', ',', tempString)
        clean1 = re.findall(('[-+]?[0-9]*\.?[0-9]+,[-+]?[0-9]*\.?[0-9]+,'
                             '[-+]?[0-9]*\.?[0-9]+'), tempString)
        tempString = '_'.join(str(n) for n in clean1)
        tempString = re.sub(',', ' ', tempString)

    Basically it's a long string containing commas and about 1-5 million sets of 4 floats/ints (a mixture of both is possible):

        -5.65500020981,6.88999986649,-0.454999923706,1,,,-5.65500020981,6.95499992371,-0.454999923706,1,,,

    I don't need/want the 4th number in each set; I'm essentially just trying to split the string into a list with 3 floats in each entry, separated by spaces. The above code works flawlessly, but as you can imagine it is quite time-consuming on large strings. I have done a lot of research on here for a solution, but the answers all seem geared towards words, i.e. swapping out one word for another.

    EDIT: OK, so this is the solution I'm currently using:

        def getValues(s):
            output = []
            while s:
                # get the three values you want, discard the 3 commas, and the
                # remainder of the string
                v1, v2, v3, _, _, _, s = s.split(',', 6)
                output.append("%s %s %s" % (v1.strip(), v2.strip(), v3.strip()))
            return output

        coords = getValues(tempString)

    Does anyone have any advice to speed this up even further? After running some tests, it still takes much longer than I'm hoping for. I've been glancing at NumPy, but I honestly have absolutely no idea how to do the above with it. I understand that after the string has been cleaned up I could use the values more efficiently with NumPy, but I'm not sure how NumPy could apply to the cleaning itself. Cleaning 50k sets takes around 20 minutes; I can't imagine how long it would take on my full string of 1 million sets. It's just surprising that the program that originally exported the data took only around 30 seconds for the 1 million sets.
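
    A note on the loop above: each s.split(',', 6) copies the entire remaining string, so the work grows quadratically with input size, which is consistent with 50k sets taking 20 minutes. Splitting the whole string once avoids that. A sketch, assuming the data really is groups of four values separated by runs of two or more commas, as in the sample:

        import re

        def get_values(flat):
            # one pass: break on any run of 2+ commas, then keep the first
            # three fields of each record and drop the fourth
            out = []
            for rec in re.split(r",{2,}", flat):
                fields = rec.split(",")
                if len(fields) >= 3:
                    out.append(" ".join(f.strip() for f in fields[:3]))
            return out

    On the sample layout this produces the same "x y z" strings as getValues, in a single linear pass.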


  • Get XML-RPC (Android - PHP) web service params of different types

    - by Jovan
    Hi, I want to create an XML-RPC web service for Android (client) to PHP (server) communication. I created the XML-RPC PHP web service server using this tutorial: http://articles.sitepoint.com/article/own-web-service-php-xml-rpc/5. For the Android client I use this project: http://code.google.com/p/android-xmlrpc/. Server and client communication is OK, but I have a problem getting the params from the Android client on the PHP server. From the Android client I send two params (first an integer and second a float number):

        Object[] params = {
            3,
            3.6f,
        };
        method.call(params);

    but I don't know how to handle these parameters on the PHP server. On the PHP server there is this function, but it takes only one param ($news_id):

        function news_viewNewsItem ( $news_id ) {
            /* Define the query to fetch the news item */
            $query = "SELECT * FROM kd_xmlrpc_news WHERE news_id = '" . $news_id . "'";
            $sql = mysql_query ( $query );
            if ( $result = mysql_fetch_array ( $sql ) ) {
                /* Extract the variables for sending in our server response */
                $news_item['news_id'] = $result['news_id'];
                $news_item['date'] = XMLRPC_convert_timestamp_to_iso8601( mysql_datetime_to_timestamp( $result['date'] ) );
                $news_item['title'] = $result['title'];
                $news_item['full_desc'] = $result['full_desc'];
                $news_item['author'] = $result['author'];
                /* Respond to the client with the news item */
                XMLRPC_response(XMLRPC_prepare($news_item), KD_XMLRPC_USERAGENT);
            } else {
                /* If there was an error, respond with a fault code instead */
                XMLRPC_error("1", "news_viewNewsItem() error: Unable to read news:" . mysql_error(), KD_XMLRPC_USERAGENT);
            }
        }

    In the server.py file there is a function for every method, but I don't know how to write the same thing in the PHP server:

        def add(self, x, y):
            print
            print "input x=%s, y=%s" % (str(x), str(y))
            sum = x + y
            print "output", sum
            print
            return sum

    Can someone help me with the code and tell me how to handle various param types from client to server? Thanks, and Happy New Year!


  • Comparing strings that contain formatting in C#

    - by Finglas
    I'm working on a function whose output (in string form) is modified according to settings such as line spacing. In order to test such scenarios, I'm using string literals for the expected result, as shown below. The method generates the output using a StringBuilder (AppendLine). The issue I have run into is comparing such strings. In the example below, both are equal in terms of what they represent, but when comparing the two strings, one a verbatim literal and one not, equality naturally fails. This is because one of the strings contains real line breaks, while the other only spells out the escape sequences it contains. What would be the best way of solving this equality problem? I do care about formatting such as new lines in the result of the method; that is crucially important.

    Code:

        string expected = @"Test\n\n\nEnd Test.";
        string result = "Test\n\n\nEnd Test";

        Console.WriteLine(expected);
        Console.WriteLine(result);

    Output:

        Test\n\n\nEnd Test.
        Test


        End Test
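
    The same pitfall exists in any language with raw or verbatim literals; here it is sketched in Python, where r"..." behaves like C#'s @"..." in that backslash-n stays as two literal characters. One way to compare is to normalise the escaped form first (note that in the snippet above, expected also ends with a period that result lacks, so even a normalised comparison of those exact strings would fail):

        import codecs

        expected = r"Test\n\n\nEnd Test"   # raw literal: backslash + 'n', no real newlines
        result = "Test\n\n\nEnd Test"      # real newline characters

        print(expected == result)                                    # False
        print(codecs.decode(expected, "unicode_escape") == result)   # True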


  • Data historian queries

    - by Scott Dennis
    Hi, I have a table that contains data for electric motors, in this format:

        DATE (DateTime)         | TagName (VarChar(50)) | Val (Float)
        2009-11-03 17:44:13.000 | Motor_1               | 123.45
        2009-11-04 17:44:13.000 | Motor_1               | 124.45
        2009-11-05 17:44:13.000 | Motor_1               | 125.45
        2009-11-03 17:44:13.000 | Motor_2               | 223.45
        2009-11-04 17:44:13.000 | Motor_2               | 224.45

    Data for each motor is inserted daily, so there would be 31 Motor_1 rows, 31 Motor_2 rows, and so on. We do this so we can trend the values on our control system displays. The Val is a non-resettable accumulation from a PLC (controller). I am using views to extract last month's max and min values, and the same for this month's data. Then I join the two and calculate the difference to get the actual run hours for that month.

    This is my query for last month's max value:

        SELECT TagName, Val AS Hours
        FROM dbo.All_Data_From_Last_Mon AS cur
        WHERE (NOT EXISTS
            (SELECT TagName, Val
             FROM dbo.All_Data_From_Last_Mon AS high
             WHERE (TagName = cur.TagName) AND (Val > cur.Val)))

    And this is my query for last month's min value:

        SELECT TagName, Val AS Hours
        FROM dbo.All_Data_From_Last_Mon AS cur
        WHERE (NOT EXISTS
            (SELECT TagName, Val
             FROM dbo.All_Data_From_Last_Mon AS high
             WHERE (TagName = cur.TagName) AND (Val < cur.Val)))

    This is the query that calculates the difference, and it runs a bit slow:

        SELECT dbo.Motors_Last_Mon_Max.TagName,
               STR(dbo.Motors_Last_Mon_Max.Hours - dbo.Motors_Last_Mon_Min.Hours, 12, 2) AS Hours
        FROM dbo.Motors_Last_Mon_Min
        RIGHT OUTER JOIN dbo.Motors_Last_Mon_Max
            ON dbo.Motors_Last_Mon_Min.TagName = dbo.Motors_Last_Mon_Max.TagName

    I know there is a better way. Ultimately I just need last month's total and this month's total. Any help would be appreciated. Thanks in advance.
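
    As an aside, the per-tag max and min can come out of a single grouped aggregate (MAX(Val) - MIN(Val) with GROUP BY TagName) rather than two NOT EXISTS views joined back together. A Python sketch of that aggregation, using the sample rows above, to show the shape of the computation:

        from collections import defaultdict

        rows = [("Motor_1", 123.45), ("Motor_1", 124.45), ("Motor_1", 125.45),
                ("Motor_2", 223.45), ("Motor_2", 224.45)]

        vals = defaultdict(list)
        for tag, val in rows:
            vals[tag].append(val)

        # one pass per group: max minus min is the month's run hours
        hours = {tag: max(v) - min(v) for tag, v in vals.items()}
        print(hours)  # roughly {'Motor_1': 2.0, 'Motor_2': 1.0}, modulo float rounding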


  • What is the difference between .get() and .fetch(1)

    - by AutomatedTester
    I have written an app, and part of it uses a URL parser to get certain data in a REST-like manner. So if you put /foo/bar as the path it will find all the bar items, and if you put /foo it will return all items below foo. My app has a query like:

        data = Paths.all().filter('path =', self.request.path).get()

    which works brilliantly. Now I want to send this to the UI using templates:

        {% for datum in data %}
        <div class="content">
            <h2>{{ datum.title }}</h2>
            {{ datum.content }}
        </div>
        {% endfor %}

    When I do this I get a "data is not iterable" error. So I updated the Django to {% for datum in data.all %}, which now appears to pull more data than I was giving it somehow: it shows all the data in the datastore, which is not ideal. So I removed the .all from the Django and changed the datastore query to:

        data = Paths.all().filter('path =', self.request.path).fetch(1)

    which now works as I intended. The documentation says:

        The db.get() function fetches an entity from the datastore for a Key (or list of Keys).

    So my question is: why can I iterate over a query when it returns with fetch(), but can't with get()? Where has my understanding gone wrong?
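
    A sketch of the difference against the old App Engine db API: Query.get() executes the query and returns the first entity (or None), while Query.fetch(1) returns a list of at most one entity, and only the list is iterable in a template loop. The data.all behaviour likely falls out of the same distinction: on a single entity, the template's attribute lookup resolves to the model's all() classmethod, a brand-new query over every Paths entity.

        q = Paths.all().filter('path =', '/foo/bar')

        first = q.get()    # a single Paths entity or None; not iterable
        page = q.fetch(1)  # a list: [entity] or []; iterable, so the loop works

        # {% for datum in data.all %} on a single entity effectively calls
        # data.all(), i.e. Paths.all(): a fresh query over the whole kind,
        # which is why every record in the datastore showed up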


  • Questions about the Backpropagation Algorithm

    - by Colemangrill
    I have a few questions concerning backpropagation. I'm trying to learn the fundamentals behind neural network theory and wanted to start small, building a simple XOR classifier. I've read a lot of articles and skimmed multiple textbooks, but I can't seem to teach this thing the pattern for XOR. Firstly, I am unclear about the learning model for backpropagation. Here is some pseudo-code to represent how I am trying to train the network. [Let's assume my network is set up properly, i.e. multiple inputs connect to a hidden layer, which connects to an output layer, and everything is wired up properly.]

        SET guess = getNetworkOutput() // Note this is using a sigmoid activation function.
        SET error = desiredOutput - guess
        SET delta = learningConstant * error * sigmoidDerivative(guess)

        For Each Node in inputNodes
            For Each Weight in inputNodes[n]
                inputNodes[n].weight[j] += delta;

        // At this point, I am assuming the first layer has been trained.
        // Then I recurse a similar function over the hidden layer and output layer.
        // The prime difference being that it further divvies up the adjustment delta.

    I realize this is probably not enough to go off of, and I will gladly expound on any part of my implementation. Using the above algorithm, my neural network does get trained, kind of, but not properly. The output is always:

        XOR 1 1 [smallest number]
        XOR 0 0 [largest number]
        XOR 1 0 [medium number]
        XOR 0 1 [medium number]

    I can never train the [1,1] and [0,0] inputs to produce the same value. If you have any suggestions, additional resources, articles, blogs, etc. for me to look at, I am very interested in learning more about this topic. Thank you for your assistance, I appreciate it greatly!
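
    For reference, here is a minimal working backprop loop for XOR, sketched in Python/NumPy rather than the pseudo-code above. Two things it does that the pseudo-code does not: each layer gets its own delta (the output delta is propagated back through the weights, not applied directly to the input weights), and every layer has a bias term, without which XOR is much harder to fit. Depending on the random seed it may need more iterations:

        import numpy as np

        rng = np.random.default_rng(1)

        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        # 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output
        W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
        W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        lr = 1.0
        for _ in range(10000):
            # forward pass
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)

            # backward pass: delta = error * sigmoid', chained layer by layer
            d_out = (out - y) * out * (1.0 - out)   # output-layer delta
            d_h = (d_out @ W2.T) * h * (1.0 - h)    # hidden-layer delta

            # gradient-descent updates
            W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
            W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0, keepdims=True)

        print(out.round(2))  # should approach [[0], [1], [1], [0]]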


  • Calling a jQuery plugin inside itself

    - by Real Tuty
    I am trying to create a comet-like thing: I have a plugin that collects data from a PHP page. The problem is that I don't know how to call the plugin inside itself. If it were a plain function I could go like this:

        function j() { setTimeout(j, 1000); }

    but I am using a jQuery plugin. Here is my plugin code:

        (function($) {
            $.fn.watch = function(ops) {
                var $this_ = this,
                    setngs = $.extend({
                        'type'  : 'JSON',
                        'query' : 'GET',
                        'url'   : '',
                        'data'  : '',
                        'wait'  : 1000
                    }, ops);
                if (setngs.type === '') { return false; }
                else if (setngs.query === '') { return false; }
                else if (setngs.url === '') { return false; }
                else if (setngs.wait === '') { return false; }
                else if (setngs.wait === 0) { setngs.wait = 1000; }
                var xhr = $.ajax({
                    type : setngs.query,
                    dataType : setngs.type,
                    url : setngs.url,
                    success : function(data) {
                        var i = 0;
                        for (i = 0; i < data.length; i++) {
                            var html = $this_.html(),
                                str = '<li class="post" id="post-' + data[i].id + '">' +
                                      '<div class="inner"><div class="user">' + data[i].user + '</div>' +
                                      '<div class="body">' + data[i].body + '</div></div></li>';
                            $this_.html(str + html);
                        }
                        setTimeout($this_, 1000);
                    }
                });
            };
        })(jQuery);

    Where it says setTimeout($this_, 1000); is where I'm having trouble. I don't know what to call the plugin as; $this_ is what I thought might work, but I am wrong. That is what I need to replace. Thanks for your help.
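
    The usual shape of the fix, as a sketch: $this_ is a jQuery set, not a function, so it cannot be handed to setTimeout directly; the success callback needs to schedule a fresh call to the plugin by name, wrapped in a function, e.g. setTimeout(function () { $this_.watch(ops); }, setngs.wait);. The same self-rescheduling pattern, shown in Python with threading.Timer playing the role of setTimeout (the URL and function names here are hypothetical):

        import threading

        def watch(url, wait=1.0):
            # ... fetch url and append the new posts here ...
            # then schedule the next poll by name, wrapped in a callable,
            # just as the jQuery success callback should re-invoke
            # $this_.watch(ops) from inside a function
            threading.Timer(wait, watch, args=(url, wait)).start()

        # watch("http://example.com/feed.php")  # starts the polling loop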


  • Unknown Argument Error "-p" when deploying to Heroku.

    - by user3312278
    We are deploying a Rails app to Heroku. The app makes a YouTube API call, using the Trollop gem as a command-line parser. We keep getting this error back:

        2014-07-30T23:17:57.526014+00:00 app[web.1]: Error: unknown argument '-p'.
        2014-07-30T23:17:57.526020+00:00 app[web.1]: Try --help for help.
        2014-07-30T23:17:57.526541+00:00 app[web.1]: Completed 500 Internal Server Error in 7466ms

    This is what our Trollop code looks like:

        def self.youtube_search(query)
          p ENV["YOUTUBE_DEVELOPER_KEY"]
          p query
          p "point of no return"
          p "*"*25
          youtube_service_api_name = "youtube"
          youtube_api_version = "v3"
          # opts = HTTParty.get("https://www.youtube.com/results?search_query=russia")
          opts = Trollop::options do
            opt :q, 'Search term', :source => String, :default => query
            opt :maxResults, 'Max results', :source => :int, :default => 25
          end

    What's much stranger is that it was working an hour ago and now it's not. Does anyone have any ideas? This doesn't seem to be documented anywhere.


  • Linq to xml, retrieving generic interface-based list

    - by Rita
    I have an xml document that looks like this:

        <Elements>
          <Element>
            <DisplayName />
            <Type />
          </Element>
        </Elements>

    I have an interface:

        interface IElement
        {
            string DisplayName { get; }
        }

    and a couple of derived classes:

        public class AElement : IElement
        public class BElement : IElement

    What I want to do is write the most efficient query to iterate through the xml and create a list of IElement, containing AElement or BElement, based on the 'Type' property in the xml. So far I have this:

        IEnumerable<AElement> elements =
            from xmlElement in XElement.Load(path).Elements("Element")
            where xmlElement.Element("type").Value == "AElement"
            select new AElement(xmlElement.Element("DisplayName").Value);

        return elements.Cast<IElement>().ToList();

    But this is only for AElement. Is there a way to add BElement in the same query, and also make it a generic IEnumerable? Or would I have to run this query once for each derived type?


  • Using EhCache for session.createCriteria(...).list()

    - by James Smith
    I'm benchmarking the performance gains from using a 2nd level cache in Hibernate (enabling EhCache), but it doesn't seem to improve performance. In fact, the time to perform the query slightly increases. The query is:

        session.createCriteria(MyEntity.class).list();

    The entity is:

        @Entity
        @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
        public class MyEntity {
            @Id
            @GeneratedValue
            private long id;

            @Column(length=5000)
            private String data;

            //---SNIP getters and setters---
        }

    My hibernate.cfg.xml is:

        <!-- all the normal stuff to get it to connect & map the entities, plus: -->
        <property name="hibernate.cache.region.factory_class">
            net.sf.ehcache.hibernate.EhCacheRegionFactory
        </property>

    The MyEntity table contains about 2000 rows. The problem is that before adding in the cache, the query above to list all entities took an average of 65 ms; after adding the cache, it takes an average of 74 ms. Is there something I'm missing? Is there something extra that needs to be done to increase performance?


  • JOIN (SELECT DISTINCT [..] substitute

    - by FRKT
    Hello, I'd like to find a substitute for using SELECT DISTINCT in a derived table. Let's say I have three tables:

        CREATE TABLE `trades` (
          `tradeID` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `employeeID` int(11) unsigned NOT NULL,
          `corporationID` int(11) unsigned NOT NULL,
          `profit` int(11) NOT NULL,
          KEY `tradeID` (`tradeID`),
          KEY `employeeID` (`employeeID`),
          KEY `corporationID` (`corporationID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

        CREATE TABLE `corporations` (
          `corporationID` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(255) NOT NULL,
          PRIMARY KEY (`corporationID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

        CREATE TABLE `employees` (
          `employeeID` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(255) NOT NULL,
          PRIMARY KEY (`employeeID`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1

    Let's say I'd like to find out how much profit a specific employee has generated. Simple:

        SELECT SUM(profit)
        FROM trades
        JOIN employees ON trades.employeeID = employees.employeeID
            AND employees.employeeID = 1;

    It gets trickier if I'd like to query how much revenue a specific corporation has, however. I cannot simply replicate the aforementioned query, because two or more employees from the same company might be involved in the same trade. This query should do the trick:

        SELECT SUM(profit)
        FROM trades
        JOIN (SELECT DISTINCT tradeID FROM trades WHERE trades.corporationID = 1) ...

    Unfortunately, DISTINCT joins seem crazily inefficient. Is there any alternative I can use to determine how much revenue a corporation has, taking into account that a corporation might be listed several times with the same tradeID?


  • IndexOutOfRangeException when a stream is a multiple of the buffer size

    - by dnord
    I don't have a lot of experience with streams and buffers, but I'm having to use them for a project, and I'm stuck on an exception that is thrown when the stream I'm reading is a multiple of the buffer size I've chosen. Let me show you. My code starts by reading bufferSize (100, let's say) bytes from the stream:

        numberOfBytesRead = DataReader.GetBytes(0, index, output, 0, bufferSize);

    Then I loop:

        while (numberOfBytesRead == bufferSize)
        {
            BufferWriter.Write(output);
            BufferWriter.Flush();
            index += bufferSize;
            numberOfBytesRead = DataReader.GetBytes(0, index, output, 0, bufferSize);
        }

    and once we get a read smaller than bufferSize, we know we've hit the end of the stream and can move on. But if bufferSize is 100 and the stream is 200 bytes, we'll read positions 0-99, then 100-199, and then the attempt to read 200-299 errors out. I'd like it to return 0, but it throws an error. What I'm doing to handle that is, well, a try-catch:

        catch (System.IndexOutOfRangeException)
            numberOfBytesRead = 0;

    which ends the loop and successfully finishes the thing, but we all know I don't want to control code flow with error handling. Is there a better (more standard?) way to handle stream reading when the stream length is unknown? This seems like a small wrinkle in a fairly reasonable strategy for reading streams, but I just don't know if I've got it wrong or what. The specifics (which I've cleaned up a little for posting) are a MySqlDataReader hitting a LONGBLOB column. It works whenever the buffer is larger than the number of returned bytes, or when the number of returned bytes is not a multiple of bufferSize, because in those cases we don't throw an IndexOutOfRangeException.
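
    The standard pattern avoids treating a full buffer as a promise of more data: keep reading until a read returns zero bytes, which is the normal end-of-stream signal rather than an exception. (ADO.NET data readers also typically report a BLOB's total length when GetBytes is passed a null output buffer, which would let the loop compute its stopping point up front; worth verifying against the MySQL provider.) The zero-byte-terminated loop, sketched in Python with an ordinary file object:

        BUF = 100
        with open("blob.bin", "rb") as src, open("copy.bin", "wb") as dst:
            while True:
                chunk = src.read(BUF)
                if not chunk:   # zero bytes read: clean end of stream
                    break
                dst.write(chunk)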


  • Currently using View, Should I use a hard table instead?

    - by 1001010101
    I am currently debating whether my table mapping_uGroups_uProducts, which is actually a view defined as follows, should be replaced by a hard table:

        CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
        VIEW `db`.`mapping_uGroups_uProducts` AS
        select distinct `X`.`upID` AS `upID`, `Z`.`ugID` AS `ugID`
        from ((`db`.`mapping_uProducts_Products` `X`
            join `db`.`productsInfo` `Y` on ((`X`.`pID` = `Y`.`pID`)))
            join `db`.`mapping_uGroups_Groups` `Z` on ((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

        SELECT upID FROM uProductsInfo
        JOIN fs_uProducts USING (upID)
        JOIN mapping_uGroups_uProducts USING (upID) -- could be faster if we use hard table and index
        JOIN mapping_fs_key USING (fsKeyID)
        WHERE fsName="OVERALL"
        AND ugID=1
        ORDER BY score DESC
        LIMIT 0,30;

    which is pretty slow (about 10 seconds for 30 results). I think the reason my query is so slow is that it relies on a view, which has no index to speed things up.

        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        |  1 | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          |     1 | Using temporary; Using filesort |
        |  1 | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
        |  1 | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID |     1 | Using index                     |
        |  1 | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          |   221 | Using where                     |
        |  2 | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
        |  2 | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       |     1 | Distinct                        |
        |  2 | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       |     2 | Using index; Distinct           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        7 rows in set (0.48 sec)

    The EXPLAIN here looks pretty cryptic, and I don't know whether I should drop the view and write a script to just insert everything from the view into a hard table (obviously losing the flexibility of the view, since the mapping changes quite frequently). Does anyone have any idea how I can optimize my schema better?


  • Lucene.Net: How can I add a date filter to my search results?

    - by rockinthesixstring
    I've got my searcher working really well; however, it does tend to return results that are obsolete. My site is much like NerdDinner, whereby events in the past become irrelevant. I'm currently indexing like this:

        Public Function AddIndex(ByVal searchableEvent As [Event]) As Boolean Implements ILuceneService.AddIndex
            Dim writer As New IndexWriter(luceneDirectory, New StandardAnalyzer(), False)
            Dim doc As Document = New Document

            doc.Add(New Field("id", searchableEvent.ID, Field.Store.YES, Field.Index.UN_TOKENIZED))
            doc.Add(New Field("fullText", FullTextBuilder(searchableEvent), Field.Store.YES, Field.Index.TOKENIZED))
            doc.Add(New Field("user", If(searchableEvent.User.UserName = Nothing, "User" & searchableEvent.User.ID, searchableEvent.User.UserName), Field.Store.YES, Field.Index.TOKENIZED))
            doc.Add(New Field("title", searchableEvent.Title, Field.Store.YES, Field.Index.TOKENIZED))
            doc.Add(New Field("location", searchableEvent.Location.Name, Field.Store.YES, Field.Index.TOKENIZED))
            doc.Add(New Field("date", searchableEvent.EventDate, Field.Store.YES, Field.Index.UN_TOKENIZED))

            writer.AddDocument(doc)
            writer.Optimize()
            writer.Close()

            Return True
        End Function

    Notice how I have a "date" index that stores the event date. My search then looks like this:

        ''# code omitted

        Dim reader As IndexReader = IndexReader.Open(luceneDirectory)
        Dim searcher As IndexSearcher = New IndexSearcher(reader)
        Dim parser As QueryParser = New QueryParser("fullText", New StandardAnalyzer())
        Dim query As Query = parser.Parse(q.ToLower)

        ''# We're using 10,000 as the maximum number of results to return
        ''# because I have a feeling that we'll never reach that full amount
        ''# anyways. And if we do, who in their right mind is going to page
        ''# through all of the results?
        Dim topDocs As TopDocs = searcher.Search(query, Nothing, 10000)
        Dim doc As Document = Nothing

        ''# loop through the topDocs and grab the appropriate 10 results based
        ''# on the submitted page number
        While i <= last AndAlso i < topDocs.totalHits
            doc = searcher.Doc(topDocs.scoreDocs(i).doc)
            IDList.Add(doc.[Get]("id"))
            i += 1
        End While

        ''# code omitted

    I did try the following, but it was to no avail (it threw a NullReferenceException):

        While i <= last AndAlso i < topDocs.totalHits
            If Date.Parse(doc.[Get]("date")) >= Date.Today Then
                doc = searcher.Doc(topDocs.scoreDocs(i).doc)
                IDList.Add(doc.[Get]("id"))
                i += 1
            End If
        End While

    I also found the following documentation, but I can't make heads or tails of it:

        http://lucene.apache.org/java/1_4_3/api/org/apache/lucene/search/DateFilter.html

