Search Results

Search found 5021 results on 201 pages for 'limit'.

  • Are these settings correct for sending mail through Rails/Gmail?

    - by aressidi
    Hi there, I spent a good deal of time building an email system for my Rails app that uses Gmail to send bulk mail to a list of opt-in users. I've since realized a shortcoming of using Google Apps for my mail, namely a rate limit on the number of emails it will send out (I believe 500). Anyway, I reached out to my users to see how many of them received the email, and a lot of them have not, though some have. The list I tried sending to was about 540 users, so I would have expected more "yes, got it" than "nope, still waiting" responses. I have two questions: Do these settings look correct for outgoing bulk mailing through Gmail? Again, I'm using Google Apps to manage my domain, and I know some people (including myself) have received the mailer. This is in a mail.rb initializer in my app:

        ActionMailer::Base.delivery_method = :sendmail
        ActionMailer::Base.smtp_settings = {
          :address        => "smtp.gmail.com",
          :port           => 25,
          :domain         => "mydomain.com",
          :authentication => :login,
          :user_name      => "[email protected]",
          :password       => "mypass"
        }

    Is there any way I can test whether the mail was delivered, or at least whether delivery was attempted? I can't tell where in the list the mailer stops mailing! The way I generate the list is through a query which then passes the user info to a mailer worker that sends the emails out via Starling/Workling. Any advice here would be useful. Happy to post code, but I want to make sure the method I'm using is sound first. Thanks for the help!
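
    One inconsistency worth noting in the snippet above: delivery_method is set to :sendmail, so the smtp_settings block may never be used at all. A minimal sketch of a Gmail-over-SMTP configuration (assuming Rails 2.3+, where STARTTLS support is built in; the credentials and domain are the placeholders from the question):

        ActionMailer::Base.delivery_method = :smtp
        ActionMailer::Base.smtp_settings = {
          :address              => "smtp.gmail.com",
          :port                 => 587,   # Gmail expects STARTTLS on 587, not plain port 25
          :domain               => "mydomain.com",
          :authentication       => :plain,
          :user_name            => "[email protected]",
          :password             => "mypass",
          :enable_starttls_auto => true
        }

    Setting ActionMailer::Base.raise_delivery_errors = true additionally makes failed handoffs raise instead of failing silently, which is a crude way to see where in the list delivery stops.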

  • PHP File Upload using url parameters

    - by Arthur
    Is there a way to upload a file to the server using PHP with the filename passed as a URL parameter (instead of using a submit form), something like this: myserver/upload.php?file=c:\example.txt? I'm using a local server, so I don't have problems with the filesize limit or the upload function, and I already have code to upload a file using a form:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "DTD/xhtml1-transitional.dtd">
        <html>
        <body>
        <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post" name="fileForm" enctype="multipart/form-data">
        File to upload:
        <table>
        <tr><td><input name="upfile" type="file"></td></tr>
        <tr><td><input type="submit" name="submitBtn" value="Upload"></td></tr>
        </table>
        </form>
        <?php
        if (isset($_POST['submitBtn'])) {
            // Define the upload location
            $target_path = "c:\\";
            // Create the file name with path
            $target_path = $target_path . basename($_FILES['upfile']['name']);
            // Try to move the file from the temporary directory to the defined location.
            if (move_uploaded_file($_FILES['upfile']['tmp_name'], $target_path)) {
                echo "The file " . basename($_FILES['upfile']['name']) . " has been uploaded";
            } else {
                echo "There was an error uploading the file, please try again!";
            }
        }
        ?>
        </body>

    Thanks for the help
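
    Since the file already lives on a disk the server can reach, no HTTP upload is needed at all; a minimal sketch of an upload.php that copies the file named in the query string (the parameter name comes from the question, the target folder is a hypothetical choice, and trusting a client-supplied path like this is only safe on a local test setup):

        <?php
        // called as: upload.php?file=c:\example.txt
        $source = $_GET['file'];
        $target = "c:\\uploads\\" . basename($source);   // hypothetical destination folder

        if (copy($source, $target)) {
            echo "The file " . basename($source) . " has been copied";
        } else {
            echo "There was an error copying the file";
        }
        ?>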

  • Sql query - selecting top 5 rows and further selecting rows only if User is present

    - by Gublooo
    Hello, I'm kind of stuck on how to implement this query - it's pretty similar to the query I posted earlier, but I'm not able to crack it. I have a shopping table where, every time a user buys anything, a record is inserted. Some of the fields are:

        * shopping_id (primary key)
        * store_id
        * user_id

    Now what I need is to pull only the list of those stores where he's among the top 5 visitors. When I break it down, this is what I want to accomplish:

        * Find all stores where this UserA has visited
        * For each of these stores, see who the top 5 visitors are
        * Select the store only if UserA is among the top 5 visitors

    The corresponding queries would be:

        SELECT store_id FROM shopping WHERE user_id = xxx

        SELECT user_id, COUNT(*) AS 'visits'
        FROM shopping
        WHERE store_id IN (SELECT store_id FROM shopping WHERE user_id = xxx)
        GROUP BY user_id
        ORDER BY visits DESC
        LIMIT 5

    Now I need to check whether UserA is present in this result set and select the store only if he's present. For example, if he has visited a store 5 times, but there are 5 or more people who have visited that store more than 5 times, then that store should not be selected. So I'm kind of lost here. Thanks for your help
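
    One way to express the whole thing in a single statement is to count, per store, how many users have strictly more visits than UserA, and keep the store only when that count is below 5 (ties resolved in his favor, matching the criterion above). A sketch in MySQL, with 123 standing in for UserA's id:

        SELECT v.store_id
        FROM (SELECT store_id, user_id, COUNT(*) AS visits
              FROM shopping
              GROUP BY store_id, user_id) AS v
        JOIN (SELECT store_id, COUNT(*) AS a_visits
              FROM shopping
              WHERE user_id = 123          -- hypothetical id for UserA
              GROUP BY store_id) AS a ON a.store_id = v.store_id
        GROUP BY v.store_id
        HAVING SUM(v.visits > a.a_visits) < 5;   -- users strictly ahead of UserA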

  • Passive and active sockets

    - by davsan
    Quoting from this socket tutorial: Sockets come in two primary flavors. An active socket is connected to a remote active socket via an open data connection... A passive socket is not connected, but rather awaits an incoming connection, which will spawn a new active socket once a connection is established ... Each port can have a single passive socket bound to it, awaiting incoming connections, and multiple active sockets, each corresponding to an open connection on the port. It's as if the factory worker is waiting for new messages to arrive (he represents the passive socket), and when one message arrives from a new sender, he initiates a correspondence (a connection) with them by delegating someone else (an active socket) to actually read the packet and respond back to the sender if necessary. This permits the factory worker to be free to receive new packets. ... Then the tutorial explains that, after a connection is established, the active socket continues receiving data until there are no remaining bytes, and then closes the connection. What I didn't understand is this: Suppose there's an incoming connection to the port, and the sender wants to send some little data every 20 minutes. If the active socket closes the connection when there are no remaining bytes, does the sender have to reconnect to the port every time it wants to send data? How do we persist a once-established connection for a longer time? Can you tell me what I'm missing here? My second question is: who determines the limit of the concurrently working active sockets?
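
    A connection stays open until one side closes it; nothing forces the server to close after a read. A minimal sketch in Python of a server that keeps each accepted connection alive across quiet periods (recv returns an empty string only when the peer actually disconnects, so a 20-minute silence changes nothing):

        import socket

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(('', 5000))        # the passive socket, bound to a port
        server.listen(5)

        conn, addr = server.accept()   # spawns the active socket for this sender
        while True:
            data = conn.recv(4096)     # blocks until data arrives or the peer closes
            if not data:               # empty result: the sender hung up
                break
            print 'received', repr(data)
        conn.close()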

  • Grabbing rows from MySql where current date is in between start date and end date (Check if current date lies between start date and end date)

    - by Jordan Parker
    I'm trying to select from the database to get the "campaigns" whose dates fall into the month. So far I've been successful in grabbing rows that start or end inside the current month. What I need to do now is select rows that start in one month and end a few months down the line (e.g. it's the 3rd month of the year, and there's a "campaign" that runs from the 1st month until the 5th; another example: there is a "campaign" that runs from 2012 until 2013). I'm hoping there is some way to select via MySQL all rows in which a campaign may run. If not, should I grab all data in the database and only show the ones that run through the current month? I have already made a function that displays all the days in between each date inside an array, which is called "dateRange". I've also created another which shows how many days the campaign runs for, called "runTime". Select all (obviously):

        $result = mysql_query("SELECT * FROM campaign");

    Select starting this month:

        $result = mysql_query("SELECT * FROM campaign WHERE YEAR( START ) = YEAR( CURDATE( ) ) AND MONTH( START ) = MONTH( CURDATE( ) )");

    Select ending this month:

        $result = mysql_query("SELECT * FROM campaign WHERE YEAR( END ) = YEAR( CURDATE( ) ) AND MONTH( END ) = MONTH( CURDATE( ) ) LIMIT 0 , 30");

    Code sample:

        while($row = mysql_fetch_array($result)) {
            $dateArray = dateRange($row['start'], $row['end']);
            echo "<h3>" . $row['campname'] . "</h3> Start " . $row['start'] . "<br /> End " . $row['end'];
            echo runTime($row['start'], $row['end']);
            print_r($dateArray);
        }

    In regards to the dates, the MySQL database only holds the start date and end date of the campaign.
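
    The usual trick for "runs at any point during this month" is an interval-overlap test: a campaign overlaps the month exactly when it starts on or before the month's last day and ends on or after its first day. A sketch against the columns used above (backticks because names like END collide easily with reserved words):

        SELECT *
        FROM campaign
        WHERE `start` <= LAST_DAY(CURDATE())
          AND `end`   >= DATE_FORMAT(CURDATE(), '%Y-%m-01');

    This single condition covers all three cases: starts this month, ends this month, and spans across it (including the 2012-to-2013 example).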

  • NetworkStream.Read delay .Net

    - by Gilbes
    I have a class that inherits from TcpClient. In that class I have a method to process responses. In that method I get the NetworkStream with MyBase.GetStream and call Read on it. This works fine, except the first call to Read blocks too long. And by too long I mean that the socket has received plenty of data, but won't read it until some arbitrary limit is reached. I can see that it has received plenty of data using the packet sniffer Wireshark. I have set the receive buffer to small amounts, and very small amounts (like just a few bytes), to no avail. I have done the same with the buffer byte array I pass to the Read method, and it still delays. Or to put it another way: I am downloading 600k. The download takes 5 seconds (at a little over 100k/second connection to the server, which makes sense). The initial Read call takes 2-3 seconds and tells me only 256 bytes are available (256 is the receive buffer size and the size of the array I read into). Then, magically, the other few hundred thousand bytes can be read in 256-byte chunks in only a few process ticks each. Using a packet sniffer, I know that during those initial 2-3 seconds the socket received much more than just 256 bytes. My connection wasn't .25k/second for 3 seconds and then 400k for 2 seconds. How do I get the bytes from a socket as they come in?
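
    For reference, NetworkStream.Read is documented to return as soon as at least one byte is available, so a plain loop is the canonical way to consume bytes as they arrive; if arrivals themselves are delayed, the Nagle/delayed-ACK interplay on the sending side is a more likely suspect than Read. A sketch (VB.NET, assumed from the MyBase in the question):

        ' take bytes as they arrive; Read returns once at least one byte is buffered
        Dim stream As NetworkStream = MyBase.GetStream()
        Dim buffer(8191) As Byte
        Dim bytesRead As Integer = stream.Read(buffer, 0, buffer.Length)
        While bytesRead > 0
            ' process 'bytesRead' bytes from buffer here
            bytesRead = stream.Read(buffer, 0, buffer.Length)
        End While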

  • Right way of making a multi-site and multi-lingual website on CodeIgniter

    - by DR.GEWA
    Hi there. Before anything else, let me thank you all!! Really, guys, you help a lot. When I finish my web site and have plenty of time to watch the userbase grow, I will come here again and again to answer other people's questions (if I can). So here is the problem. I made a web site on CodeIgniter: a social network engine, something like phpfox, classmates_com or facebook. Right now it's not multilingual, so the UI strings are in the view files, and the next step will be to move them to the language files. I want the user to have the ability to change the language, so I assume that in the database the user will have a column "lang_local" which would be set to "en" by default and can then be changed to any other language. So what is eating my nerves and energy is the following. I will make several demographic social networks on this engine, and I would like to manage these web sites in a centralized manner with one backend. So whenever I want to make a new web network, I just add the domain settings, install the script in a new folder, and add it to the database table "sites". I see it like this: every table in the database, like users, comments, messages, categories, etc., will have a column site_id, and on each add/update/delete query I add WHERE SITE_ID=XXX; a table sites(site_id, site_name, domain_name) will hold all the domains, so that in the backend I can filter data by website. Is this a good way? What if I then need to go multi-server - what about load balancing? Who can tell me what would be the right, PROFESSIONAL way? My maximum user limit for a database is something like 10,000 for a start, and 100,000 users in one to two years.
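
    A sketch of the one piece that makes the shared backend work: resolving the current request's domain to a site_id once, then scoping every query with it. The table and column names follow the sites(site_id, site_name, domain_name) layout above; the PDO wiring is an assumption, and CodeIgniter's own database layer would do the same job:

        <?php
        // resolve the request's host name to a tenant id, once per request
        $host = $_SERVER['HTTP_HOST'];
        $stmt = $pdo->prepare("SELECT site_id FROM sites WHERE domain_name = ?");
        $stmt->execute(array($host));
        $site_id = (int) $stmt->fetchColumn();

        // every later query is scoped to that tenant
        $stmt = $pdo->prepare("SELECT * FROM users WHERE site_id = ? LIMIT 20");
        $stmt->execute(array($site_id));
        ?>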

  • General SQL Server query performance

    - by Kiril
    Hey guys, this might be stupid, but databases are not my thing :) Imagine the following scenario: a user can create a post and other users can reply to his post, thus forming a thread. Everything goes in a single table called Posts. All the posts that form a thread are connected with each other through a generated key called ThreadID. This means that when user #1 creates a new post, a ThreadID is generated, and every reply that follows has a ThreadID pointing to the initial post (created by user #1). What I am trying to do is limit the number of replies to, let's say, 20 per thread. I'm wondering which of the approaches below is faster: (1) I add a new integer column (e.g. Counter) to Posts. After a user replies to the initial post, I update the initial post's Counter field. If it reaches 20, I lock the thread. (2) After a user replies to the initial post, I select all the posts that have the same ThreadID. If this collection has more than 20 items, I lock the thread. For further information: I am using a SQL Server database and the Linq-to-SQL entity model. I'd be glad if you'd tell me your opinions on the two approaches or share another, faster approach. Best Regards, Kiril
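
    A middle ground worth sketching: approach 2 does not need to pull the rows back, only count them, and with an index on ThreadID that count is a cheap seek. In T-SQL (SQL Server 2008+ syntax for the inline DECLARE; the names and the 20 come from the question):

        -- assumes: CREATE INDEX IX_Posts_ThreadID ON Posts (ThreadID);
        DECLARE @ThreadID INT = 42;   -- hypothetical thread id

        IF (SELECT COUNT(*) FROM Posts WHERE ThreadID = @ThreadID) >= 20
        BEGIN
            -- lock the thread here, e.g. set a flag on the initial post
            PRINT 'thread is full';
        END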

  • Convert HTML + CSS to PDF with PHP?

    - by cletus
    OK, I'm now banging my head against a brick wall with this one. I have an HTML (not XHTML) document that renders fine in Firefox 3 and IE 7. It uses fairly basic CSS to style it and renders fine in HTML. I'm now after a way of converting it to PDF. I have tried:

        * DOMPDF: it had huge problems with tables. I factored out my large nested tables and it helped (before that it was just consuming up to 128M of memory and then dying - that's my memory limit in php.ini), but it makes a complete mess of tables and doesn't seem to get images. The tables were just basic stuff with some border styles to add some lines at various points.
        * HTML2PDF and HTML2PS: I actually had better luck with these. They rendered some of the images (all the images are Google Chart URLs) and the table formatting was much better, but there seemed to be some complexity problem I haven't figured out yet, and they kept dying with unknown node_type() errors. Not sure where to go from here.
        * Htmldoc: this seems to work fine on basic HTML but has almost no support for CSS whatsoever, so you have to do everything in HTML (I didn't realize it was still 2001 in Htmldoc-land...), so it's useless to me.

    I tried a Windows app called Html2Pdf Pilot that actually did a pretty decent job, but I need something that at a minimum runs on Linux and ideally runs on-demand via PHP on the webserver. I really can't believe I'm this stuck. Am I missing something?

  • Does the Lucene search function work on large documents?

    - by shaon-fan
    Hi there, I have a problem when searching with Lucene. First, the indexing function works well even on huge documents, such as .pst files (the Outlook mail storage); it can build an index that includes all the information in the .pst. The only problem is that the index is sometimes too large and includes very many words. So when I search using Lucene, it can only process the front part of the index; if a word only occurs in the back part, it finds no hits in the result. But when I split the index into several parts (in a crude way, while debugging) and search each part, it works well. So I want to know: how should the index be split, and how much size should be the limit for searching? Cheers, and waiting for a reply. Following what Coady said, I set the max length to 2^31-1, but the search result still doesn't include what I want. Simply put, I convert the document's words to a string array to analyze; one document has 79,680 words including spaces and symbols. When I search for a certain word, it returns a count of just 300, when actually there are more than 300 occurrences. For the same reason, when I search for a word in the back part of the document, it still can't be found.

        // set the length
        indexWriter.SetMaxFieldLength(2147483647);

        // search
        IndexSearcher searcher = new IndexSearcher(Program.Parameters["INDEX_LOCATION"].ToString());
        Hits hits = searcher.Search(query);

    This is my code, same as anyone else's. I found this problem when I needed to count every word's hits in a document, and that's also how I found it couldn't find words in the back part of the document. Please help me figure out: is there a length setting on the searcher somewhere? How have you handled this problem?
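
    One detail that bites here: MaxFieldLength is an indexing-time cap (it defaults to 10,000 terms per field), so raising it on the writer only affects documents added after the change; anything indexed earlier stays truncated and must be re-indexed. A sketch in Lucene.NET (assuming 2.4+, where the MaxFieldLength constructor argument exists; on older versions, calling SetMaxFieldLength before AddDocument does the same job):

        // lift the cap before the documents are added, then re-index the large ones
        IndexWriter writer = new IndexWriter(indexLocation,   // hypothetical index path/Directory
                                             analyzer,
                                             true,            // recreate the index
                                             IndexWriter.MaxFieldLength.UNLIMITED);
        writer.AddDocument(doc);   // re-add the affected documents here
        writer.Close();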

  • facebook javascript api

    - by ngreenwood6
    I am trying to get my status from Facebook using the JavaScript API. I have the following code:

        <div id="fb-root"></div>
        <div id="data"></div>
        <script src="http://connect.facebook.net/en_US/all.js"></script>
        <script type="text/javascript">
        (function(){
          FB.init({
            appId  : 'SOME ID',
            status : true, // check login status
            cookie : true, // enable cookies to allow the server to access the session
            xfbml  : true  // parse XFBML
          });
        });
        getData();
        function getData(){
          var query = FB.Data.query('SELECT message FROM status WHERE uid=12345 LIMIT 10');
          query.wait(function(rows) {
            for(i=0;i<rows.length;i++){
              document.getElementById('data').innerHTML += 'Your status is ' + rows[i].message + '<br />';
            }
          });
        }
        </script>

    When I try to get my name it works fine, but the status is not working. Can someone please tell me what I am doing wrong, because the documentation for this is horrible. And yes, I replaced the uid with a fake one, and yes, I put in my app id, because like I said, when I try to get my name it works fine. Any help is appreciated.
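
    Two things stand out in the snippet as pasted. First, the wrapper around FB.init is a function expression that is never invoked, so as written, init never runs. Second, the FQL status table generally requires an access token with the read_stream permission, which is an assumption about this app's setup worth checking. A sketch of the invoked form:

        // note the trailing () - it invokes the wrapper so FB.init actually executes
        (function(){
          FB.init({
            appId  : 'SOME ID',
            status : true,
            cookie : true,
            xfbml  : true
          });
          getData(); // run the query only after init has run
        })();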

  • Your favourite C++ Standard Library wrapper functions?

    - by Neil Butterworth
    This question, asked this morning, made me wonder which features you think are missing from the C++ Standard Library, and how you have gone about filling the gaps with wrapper functions. For example, my own utility library has this function for vector append:

        template <class T>
        std::vector<T> & operator += ( std::vector<T> & v1, const std::vector<T> & v2 )
        {
            v1.insert( v1.end(), v2.begin(), v2.end() );
            return v1;
        }

    and this one for clearing (more or less) any type, particularly useful for things like std::stack:

        template <class C>
        void Clear( C & c )
        {
            c = C();
        }

    I have a few more, but I'm interested in which ones you use. Please limit answers to wrapper functions - i.e. no more than a couple of lines of code.
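
    In the same spirit, a sketch of another one-liner that tends to get hand-rolled over and over: a membership test, which the standard containers only expose via the find/end dance.

        #include <algorithm>

        template <class C, class T>
        bool Contains( const C & c, const T & value )
        {
            return std::find( c.begin(), c.end(), value ) != c.end();
        }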

  • Problem fetching data from several tables in one query

    - by Mac Taylor
    Hey guys, in an attempt to combine my queries into one query to the database, I now need to get the usernames of the first poster and the last poster of a topic in my forums. Here is my code so far:

        $result = $db->sql_query("SELECT t.*, p.*, u.*,
            SUM(t.topic_approved='1') AS Amount_Of_Topics,
            SUM(p.post_approved ='1') AS Amount_Of_Posts
            FROM bb3topics t, bb3posts p, bb3users u
            GROUP BY t.topic_last_post_id
            ORDER BY t.topic_last_post_id DESC LIMIT 10");
        while( $row = $db->sql_fetchrow($result) ) {
            $Amount_Of_Topics = $row['Amount_Of_Topics'];
            $Amount_Of_Posts = $row['Amount_Of_Posts'];
            $Amount_Of_Topic_Replies = $Amount_Of_Topic_Replies + $row['topic_replies'];
            $Amount_Of_Topic_Views = $Amount_Of_Topic_Views + $row['topic_views'];
            $topic_id = $row['topic_id'];
            $forum_id = $row['forum_id'];
            $topic_last_post_id = $row['topic_last_post_id'];
            $topic_title = $row['topic_title'];
            $topic_poster = $row['topic_poster'];
            $topic_views = $row['topic_views'];
            $topic_replies = $row['topic_replies'];
            $topic_moved_id = $row['topic_moved_id'];
            $topic_time = $row['topic_time'];

            $result2 = $db->sql_query("SELECT topic_id, poster_id, post_time FROM bb3posts WHERE post_id = '$topic_last_post_id'");
            list( $topic_id, $poster_id, $post_time ) = $db->sql_fetchrow( $result2 );

            $result3 = $db->sql_query("SELECT username, user_id FROM bb3users WHERE user_id='$poster_id'");
            list( $uname, $uid ) = $db->sql_fetchrow( $result3 );
            $LastPoster = "$uname";

            $result4 = $db->sql_query("SELECT username, user_id FROM bb3users WHERE user_id='$topic_poster'");
            list( $uname, $uid ) = $db->sql_fetchrow( $result4 );
            $OrigPoster = "$uname";

    Now I need to query all of this together, not in separate queries. I tried using LEFT JOIN but it didn't work. What MySQL construction should I use?
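
    The usual shape for this is to join bb3posts once on topic_last_post_id and bb3users twice: once for the topic starter and once for the last poster. A sketch built from the column names above:

        SELECT t.*,
               u1.username AS orig_poster,
               u2.username AS last_poster,
               p.post_time AS last_post_time
        FROM bb3topics t
        JOIN bb3users u1 ON u1.user_id = t.topic_poster
        JOIN bb3posts p  ON p.post_id  = t.topic_last_post_id
        JOIN bb3users u2 ON u2.user_id = p.poster_id
        ORDER BY t.topic_last_post_id DESC
        LIMIT 10;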

  • python can't start a new thread

    - by Giorgos Komnino
    I am building a multi-threaded application. I have set up a thread pool: a Queue of size N, and N workers that get data from the queue. When all tasks are done I use tasks.join(), where tasks is the queue. The application seems to run smoothly until suddenly, at some point (after 20 minutes, for example), it terminates with the error "thread.error: can't start new thread". Any ideas? Edit: the threads are daemon threads and the code is like this:

        while True:
            t0 = time.time()
            keyword_statuses = DBSession.query(KeywordStatus).filter(KeywordStatus.status==0).options(joinedload(KeywordStatus.keyword)).with_lockmode("update").limit(100)
            if keyword_statuses.count() == 0:
                DBSession.commit()
                break
            for kw_status in keyword_statuses:
                kw_status.status = 1
            DBSession.commit()
            t0 = time.time()
            w = SWorker(threads_no=32, network_server='http://192.168.1.242:8180/', keywords=keyword_statuses, cities=cities, saver=MySqlRawSave(DBSession), loglevel='debug')
            w.work()
        print 'finished'

    When are the daemon threads killed? When the application finishes, or when work() finishes? Look at the thread pool and the worker (it's from a recipe):

        from Queue import Queue
        from threading import Thread, Event, current_thread
        import time

        event = Event()

        class Worker(Thread):
            """Thread executing tasks from a given tasks queue"""
            def __init__(self, tasks):
                Thread.__init__(self)
                self.tasks = tasks
                self.daemon = True
                self.start()

            def run(self):
                '''Start processing tasks from the queue'''
                while True:
                    event.wait()
                    #time.sleep(0.1)
                    try:
                        func, args, callback = self.tasks.get()
                    except Exception, e:
                        print str(e)
                        return
                    else:
                        if callback is None:
                            func(args)
                        else:
                            callback(func(args))
                        self.tasks.task_done()

        class ThreadPool:
            """Pool of threads consuming tasks from a queue"""
            def __init__(self, num_threads):
                self.tasks = Queue(num_threads)
                for _ in range(num_threads):
                    Worker(self.tasks)

            def add_task(self, func, args=None, callback=None):
                '''Add a task to the queue'''
                self.tasks.put((func, args, callback))

            def wait_completion(self):
                '''Wait for completion of all the tasks in the queue'''
                self.tasks.join()

            def broadcast_block_event(self):
                '''blocks running threads'''
                event.clear()

            def broadcast_unblock_event(self):
                '''unblocks running threads'''
                event.set()

            def get_event(self):
                '''returns the event object'''
                return event
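
    A plausible reading of the loop above: every pass constructs a new SWorker with threads_no=32, and since each Worker loops forever on event.wait()/tasks.get(), none of those threads ever exits; after enough passes the process hits the OS thread limit and Thread.start() fails. A sketch of the usual fix, creating the pool once and reusing it across batches (fetch_next_batch and process are hypothetical stand-ins for the DBSession query and the per-keyword work):

        pool = ThreadPool(32)                      # build the 32 workers exactly once

        while True:
            batch = fetch_next_batch()             # hypothetical: the DBSession query above
            if not batch:
                break
            for item in batch:
                pool.add_task(process, item)       # hypothetical per-item task function
            pool.wait_completion()                 # tasks.join() for this batch

        print 'finished'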

  • Targeted Simplify in Mathematica

    - by Timo
    I generate very long and complex analytic expressions of the general form: (...something not so complex...)(...ditto...)(...ditto...)...lots... When I try to use Simplify, Mathematica grinds to a halt; I am assuming this is because it tries to expand the brackets and/or simplify across different brackets. The brackets, while containing long expressions, are easily simplified by Mathematica on their own. Is there some way I can limit the scope of Simplify to a single bracket at a time? Edit: some additional info and progress. Using the advice from you guys, I have now started using something in the vein of:

        In[1]:= trouble = Log[(x + I y) (x - I y) + Sqrt[(a + I b) (a - I b)]];
        In[2]:= Replace[trouble, form_ /; (Head[form] == Times) :> Simplify[form], {3}]
        Out[2]= Log[Sqrt[a^2 + b^2] + (x - I y) (x + I y)]

    Changing Times to an appropriate head like Plus or Power makes it possible to target the simplification quite accurately. The problem/question that remains, though, is the following: Simplify will still descend deeper than the level specified to Replace, e.g.

        In[3]:= Replace[trouble, form_ /; (Head[form] == Plus) :> Simplify[form], {1}]
        Out[3]= Log[Sqrt[a^2 + b^2] + x^2 + y^2]

    simplifies the square root as well. My plan was to iteratively use Replace from the bottom up, one level at a time, but this will clearly result in a vast amount of repeated work by Simplify and ultimately in the exact same bogging down of Mathematica I experienced at the outset. Is there a way to restrict Simplify to a certain level (or levels)? I realize that this sort of restriction may not produce optimal results, but the idea here is getting something that is "good enough".

  • Cairo / GTK example code crashes when window is too big or maximized

    - by user1890673
    I have copied and compiled the source code available in the section titled "Full Source" at http://cairographics.org/threaded_animation_with_cairo/ and adapted it to a project I'm working on, only to find that the app would crash when I made the window too big. Going back to the original example code, it too crashes when the window is too big (1000x1000 or so). I narrowed it down to this line in the example, which appears to be responsible:

        pixmap = gdk_pixmap_new(window->window, 500, 500, -1);

    where pixmap is of type GdkPixmap*. Resizing the window should overwrite pixmap with a new pixmap that is the size of the window. I am doing this in Eclipse Juno on Windows Vista, 32-bit. My compiler is MinGW version 0.5-beta-20120426-1. My GTK+ version is 2.24.10 and apparently Cairo is 1.10.2. I added all of the includes and libraries for GTK and also added the compiler switch -mms-bitfields. Is there a limit to the size of a pixmap or something? I'm just starting with GTK from examples, so I'm not sure where to go if this example doesn't work.
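
    For reference, the crash pattern (fine when small, crash when big) is consistent with the drawing thread writing outside the fixed 500x500 backing pixmap once the window outgrows it. A sketch of the usual remedy in GTK+ 2 (matching the 2.24.10 above): recreate the pixmap in a configure-event handler so it always matches the window; pixmap here is assumed to be the example's file-scope global, and the swap should be guarded by whatever mutex the example uses around it.

        /* recreate the backing pixmap at the new window size on every resize */
        static gboolean on_configure(GtkWidget *widget, GdkEventConfigure *event, gpointer data)
        {
            if (pixmap)
                g_object_unref(pixmap);
            pixmap = gdk_pixmap_new(widget->window,
                                    widget->allocation.width,
                                    widget->allocation.height,
                                    -1);
            return TRUE;
        }

        /* hook it up once, after the window is created */
        g_signal_connect(window, "configure-event", G_CALLBACK(on_configure), NULL);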

  • acts_as_xapian jobs table

    - by Grnbeagle
    Hi, can someone explain to me the inner workings of the acts_as_xapian_jobs table? I ran into an issue with the acts_as_xapian plugin recently, where I kept getting the following error when it creates an object with xapian-indexed fields:

        Mysql::Error: Duplicate entry 'String-2147483647' for key 2: INSERT INTO `acts_as_xapian_jobs` (`action`, `model`, `model_id`) VALUES ('update', 'String', 23730251831560)

    It turns out the model_id exceeded the max int value of 2147483647. The workaround was to change model_id to use bigint. Why would the model_id be so huge? Looking at the contents of acts_as_xapian_jobs, it seems it creates a row for every field that is being indexed. Understanding how a job gets created in the table would help a great deal. Here's a sampling of the table:

        mysql> select * from acts_as_xapian_jobs limit 5\G
        *************************** 1. row ***************************
              id: 19
           model: String
        model_id: 23804037900560
          action: update
        *************************** 2. row ***************************
              id: 49
           model: String
        model_id: 23804037191200
          action: update
        *************************** 3. row ***************************
              id: 79
           model: String
        model_id: 23804037932180
          action: update
        *************************** 4. row ***************************
              id: 109
           model: String
        model_id: 23804037101700
          action: update
        *************************** 5. row ***************************
              id: 139
           model: String
        model_id: 23804037722160
          action: update

    Thanks in advance, Amie
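
    For completeness, a sketch of the bigint workaround described above (MySQL DDL, run once against the app's database, e.g. from a migration; whether the column is NOT NULL in your schema is an assumption to check first):

        ALTER TABLE acts_as_xapian_jobs MODIFY model_id BIGINT NOT NULL;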

  • Querying the Datastore in python

    - by Ray
    Greetings! I am trying to work with a single column in the datastore. I can view and display the contents, like this:

        q = test.all()
        q.filter("adjclose =", "adjclose")
        q = db.GqlQuery("SELECT * FROM test")
        results = q.fetch(5)
        for p in results:
            p1 = p.adjclose
            print "The value is --> %f" % (p.adjclose)

    However, I need to calculate the historical values from the adjclose column, and I am not able to get past the errors:

        for c in range(len(p1)-1):
        TypeError: object of type 'float' has no len()

    Here is my code:

        for c in range(len(p1)-1):
            p1.append(p1[c+1]-p1[c]/p1[c])
            p2 = (p1[c+1]-p1[c]/p1[c])
            print "the p1 value<-- %f" % (p2)
            print "dfd %f" %(p1)

    I'm new to Python; any help will be greatly appreciated! Thanks in advance, Ray. HERE IS THE COMPLETE CODE:

        class CalHandler(webapp.RequestHandler):
            def get(self):
                que = db.GqlQuery("SELECT * from test")
                user_list = que.fetch(limit=100)
                doRender(self, 'memberscreen2.htm', {'user_list': user_list})
                q = test.all()
                q.filter("adjclose =", "adjclose")
                q = db.GqlQuery("SELECT * FROM test")
                results = q.fetch(5)
                for p in results:
                    p1 = p.adjclose
                    print "The value is --> %f" % (p.adjclose)
                    for c in range(len(p1)-1):
                        p1.append(p1[c+1]-p1[c]/p1[c])
                        print "the p1 value<--> %f" % (p2)
                        print "dfd %f" %(p1)
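
    The loop fails because p1 is reassigned to a single float on every pass, never a list. A sketch of the intended calculation: collect the values first, then difference them. Note the original expression p1[c+1]-p1[c]/p1[c] also needs parentheses, since as written it computes p1[c+1] minus 1.

        results = q.fetch(5)
        prices = [p.adjclose for p in results]    # a list of floats, not one float

        for c in range(len(prices) - 1):
            change = (prices[c+1] - prices[c]) / prices[c]   # parenthesize the difference
            print "the change is --> %f" % change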

  • How to restrict the content of a string to less than 4MB and save that string in a DB using C#

    - by Pranay B
    I'm working on a project where I need to get the text data from PDF files and dump the whole text into a DB column. With the help of iTextSharp, I got the data and put it into a string. But now I need to check whether the string exceeds the 4MB limit, and if it does, accept only the string data which is less than 4MB in size. This is my code:

        internal string ReadPdfFiles()
        {
            // variable to store file path
            string filePath = null;

            // open dialog box to select file
            OpenFileDialog file = new OpenFileDialog();

            // dialog box title name
            file.Title = "Select Pdf File";

            // files to be accepted by the user.
            file.Filter = "Pdf file (*.pdf)|*.pdf|All files (*.*)|*.*";

            // set initial directory of computer system
            file.InitialDirectory = Environment.GetFolderPath(Environment.SpecialFolder.Desktop);

            // set restore directory
            file.RestoreDirectory = true;

            // execute if block when dialog result box click ok button
            if (file.ShowDialog() == DialogResult.OK)
            {
                // store selected file path
                filePath = file.FileName.ToString();
            }

            // use a string array and pass all the pdf for searching
            //String filePath = @"D:\Pranay\Documentation\Working on SSAS.pdf";
            try
            {
                // creating an instance of PdfReader class
                using (PdfReader reader = new PdfReader(filePath))
                {
                    // creating an instance of StringBuilder class
                    StringBuilder text = new StringBuilder();

                    // use loop to specify how many pages to read.
                    // I started from 5th page as Piyush told
                    for (int i = 5; i <= reader.NumberOfPages; i++)
                    {
                        // Read the pdf
                        text.Append(PdfTextExtractor.GetTextFromPage(reader, i));
                    } // end of for(i)

                    int k = 4096000;

                    // Test whether the string exceeds the 4MB
                    if (text.Length < k)
                    {
                        // return the string
                        text1 = text.ToString();
                    } // end of if
                } // end of using
            } // end try
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message, "Please Do select a pdf file!!", MessageBoxButtons.OK, MessageBoxIcon.Warning);
            } // end of catch

            return text1;
        } // end of ReadPdfFiles() method

    Do help me!
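
    As written, the method returns nothing at all when the text is too big; to keep the first 4MB instead, truncate rather than skip. A sketch (note that StringBuilder.Length counts UTF-16 characters, so 4M characters only equals 4MB if the column stores single-byte text; adjust the constant to the column's actual byte semantics):

        // keep at most maxChars characters instead of dropping oversized text
        const int maxChars = 4096000;
        text1 = text.Length <= maxChars
            ? text.ToString()
            : text.ToString(0, maxChars);   // StringBuilder.ToString(startIndex, length)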

  • ConcurrentLinkedQueue$Node remains in heap after remove()

    - by action8
    I have a multithreaded app writing to and reading from a ConcurrentLinkedQueue, which is conceptually used to back entries in a list/table. I originally used a ConcurrentHashMap for this, which worked well. A new requirement required tracking the order entries came in, so they could be removed in oldest-first order, depending on some conditions. ConcurrentLinkedQueue appeared to be a good choice, and functionally it works well. A configurable number of entries are held in memory, and when a new entry is offered once the limit is reached, the queue is searched in oldest-first order for one that can be removed. Certain entries are not to be removed by the system and wait for client interaction. What appears to be happening is that I have an entry at the front of the queue that arrived, say, 100K entries ago. The queue appears to hold only the configured number of entries (size() == 100), but when profiling, I found that there were ~100K ConcurrentLinkedQueue$Node objects in memory. This appears to be by design; just glancing at the source for ConcurrentLinkedQueue, a remove merely drops the reference to the object being stored but leaves the linked list in place for iteration. Finally, my question: is there a "better" lazy way to handle a collection of this nature? I love the speed of the ConcurrentLinkedQueue; I just can't afford the unbounded leak that appears to be possible in this case. If not, it seems like I'd have to create a second structure to track order, which may have the same issues, plus a synchronization concern.
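
    For comparison, a sketch of one alternative shape: java.util.concurrent's LinkedBlockingQueue is capacity-bounded and its remove() really unlinks nodes, trading ConcurrentLinkedQueue's lock-free reads for lock-based (still thread-safe) access. Entry and isRemovable are hypothetical stand-ins for the element type and the "waiting for client interaction" policy check:

        import java.util.concurrent.LinkedBlockingQueue;

        LinkedBlockingQueue<Entry> queue = new LinkedBlockingQueue<Entry>(100);

        // offer() fails fast at capacity instead of growing without bound
        if (!queue.offer(entry)) {
            // walk oldest-first and evict the first removable entry
            for (Entry e : queue) {
                if (isRemovable(e) && queue.remove(e)) {
                    break;
                }
            }
            queue.offer(entry);
        }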

  • Is there a good digraph layout library callable from C++?

    - by Steve314
    The digraphs represent finite automata. Up until now my test program has been writing out dot files for testing. This is pretty good both for regression testing (keep the verified output files in subversion, ask it if there has been a change) and for visualisation. However, there are some problems... Basically, I want something callable from C++ and which plans a layout for my states and transitions but leaves the drawing to me - something that will allow me to draw things however I want and draw on GUI (wxWidgets) windows. I also want a license which will allow commercial use - I don't need that at present, and I may very well release as open source, but I don't want to limit my options ATM. The problems with GraphViz are (1) the warnings about building from source on Windows, (2) all the unnecessary dependencies for rendering and parsing, and (3) the (presumed) lack of a documented API specifically and purely for layout. Basically, I want to be able to specify my states (with bounding rectangle sizes) and transitions, and read out positions for the states and waypoints for each transition, then draw based on those co-ordinates myself. I haven't really figured out how annotations on transitions should be handled, but there should be some kind of provision for specifying bounding-box-sizes for those, associating them with transitions, and reading out positions. Does anyone know of a library that can handle those requirements? I'm not necessarily against implementing something for myself, but in this case I'd rather avoid it if possible.

  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that needs me to convert my existing live production table column types from enums to a string (I've duplicated the schema on my local development box, don't worry :)). Background: basically, a previous developer left my codebase in absolute shit, migration versions are extremely out of date, and apparently he never used them after a certain point in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5, because my table columns have ENUM column types and they convert to :string, :limit => 0 in my schema.rb, which creates the problem of an invalid default value when doing a rake db:test:prepare, as in the case of:

        Mysql::Error: Invalid default value for 'own_vehicle': CREATE TABLE `lifestyles` (`id` int(11) DEFAULT NULL auto_increment PRIMARY KEY, `member_id` int(11) DEFAULT 0 NOT NULL, `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL, `hobbies` text, `sports` text, `AStar_activities` text, `how_know_IRC` varchar(100), `IRC_referral` varchar(200), `IRC_others` varchar(100), `IRC_rdrive` varchar(30)) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM and replaces them with VARCHAR, and I'm wondering if this is the right way to approach the problem. I'm also not very sure how to write it such that it loops through my database tables and replaces all ENUM column types with VARCHAR.

    References
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832
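
    A sketch of such a migration (Rails 2.3-era syntax; the VARCHAR(255) width and the regex are assumptions, so check them against your longest enum members before running this on a real schema):

        class ConvertEnumColumnsToVarchar < ActiveRecord::Migration
          def self.up
            conn = ActiveRecord::Base.connection
            conn.tables.each do |table|
              conn.columns(table).each do |col|
                next unless col.sql_type =~ /\Aenum/i
                # widen the type while keeping the old default and nullability
                change_column table, col.name, :string, :limit => 255,
                              :default => col.default, :null => col.null
              end
            end
          end

          def self.down
            raise ActiveRecord::IrreversibleMigration
          end
        end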

  • copy rows before updating them to preserve archive in Postgres

    - by punkish
    I am experimenting with creating a table that keeps a version of every row. The idea is to be able to query for how the rows looked at any point in time, even if the query has JOINs. Consider a system where the primary resource is books; that is, books are queried for, and author info comes along for the ride:

        CREATE TABLE authors (
            author_id INTEGER NOT NULL,
            version INTEGER NOT NULL CHECK (version > 0),
            author_name TEXT,
            is_active BOOLEAN DEFAULT '1',
            modified_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (author_id, version)
        );

        INSERT INTO authors (author_id, version, author_name)
        VALUES (1, 1, 'John'), (2, 1, 'Jack'), (3, 1, 'Ernest');

    I would like to be able to update the above like so:

        UPDATE authors SET author_name = 'Jack K' WHERE author_id = 2;

    and end up with:

        2, 1, Jack, t, 2012-03-29 21:35:00
        2, 2, Jack K, t, 2012-03-29 21:37:40

    which I can then query with:

        SELECT author_name, modified_on
        FROM authors
        WHERE author_id = 2 AND modified_on < '2012-03-29 21:37:00'
        ORDER BY version DESC
        LIMIT 1;

    to get:

        2, 1, Jack, t, 2012-03-29 21:35:00

    Something like the following doesn't really work:

        CREATE OR REPLACE FUNCTION archive_authors() RETURNS TRIGGER AS $archive_author$
        BEGIN
            IF (TG_OP = 'UPDATE') THEN
                -- The following fails because the author_id,version PK already exists
                INSERT INTO authors (author_id, version, author_name)
                VALUES (OLD.author_id, OLD.version, OLD.author_name);
                UPDATE authors SET version = OLD.version + 1
                WHERE author_id = OLD.author_id AND version = OLD.version;
                RETURN NEW;
            END IF;
            RETURN NULL; -- result is ignored since this is an AFTER trigger
        END;
        $archive_author$ LANGUAGE plpgsql;

        CREATE TRIGGER archive_author AFTER UPDATE OR DELETE ON authors
        FOR EACH ROW EXECUTE PROCEDURE archive_authors();

    How can I achieve the above? Or is there a better way to accomplish this? Ideally, I would prefer not to create a shadow table to store the archived rows.
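
    One workable shape for this (a sketch, assuming each UPDATE matches only the latest version of an author): a BEFORE UPDATE trigger that inserts the changed values as a brand-new version and then suppresses the original UPDATE, so the old row is never touched and the PK never collides.

        CREATE OR REPLACE FUNCTION archive_authors() RETURNS TRIGGER AS $archive_author$
        BEGIN
            -- write the change as a new version; modified_on picks up its default
            INSERT INTO authors (author_id, version, author_name, is_active)
            VALUES (OLD.author_id, OLD.version + 1, NEW.author_name, NEW.is_active);
            RETURN NULL;  -- skip the UPDATE itself, leaving the old row as the archive
        END;
        $archive_author$ LANGUAGE plpgsql;

        CREATE TRIGGER archive_author BEFORE UPDATE ON authors
        FOR EACH ROW EXECUTE PROCEDURE archive_authors();

    With this in place, the UPDATE above leaves (2, 1, 'Jack') intact and adds (2, 2, 'Jack K'), which is exactly the pair of rows the question asks for.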

  • F# Seq.initInfinite giving StackOverflowException

    - by TrueWill
    I'm learning F#, and I am having trouble understanding why this crashes. It's an attempt to solve Project Euler problem 2.

        let rec fibonacci n =
            if n = 1 then 1
            elif n = 2 then 2
            else fibonacci (n - 1) + fibonacci (n - 2)

        let debugfibonacci n =
            printfn "CALC: %d" n
            fibonacci n

        let isEven n = n % 2 = 0
        let isUnderLimit n = n < 55

        let getSequence =
            //[1..30]
            Seq.initInfinite (fun n -> n)
            |> Seq.map debugfibonacci
            |> Seq.filter isEven
            |> Seq.takeWhile isUnderLimit

        Seq.iter (fun x -> printfn "%d" x) getSequence

    The final version would call a sum function (and would have a higher limit than 55), but this is learning code. As is, this gives a StackOverflowException. However, if I comment in the [1..30] and comment out the Seq.initInfinite, I get:

        CALC: 1
        CALC: 2
        2
        CALC: 3
        CALC: 4
        CALC: 5
        8
        CALC: 6
        CALC: 7
        CALC: 8
        34
        CALC: 9
        CALC: 10
        CALC: 11

    It appears to be generating items on demand, as I would expect in LINQ. So why does it blow up when used with initInfinite?
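
    For what it's worth, the crash is consistent with where the two variants start: Seq.initInfinite begins at 0, and fibonacci 0 falls through to the recursive case, calling fibonacci -1, fibonacci -2, and so on without ever reaching a base case. A sketch of the small fix, shifting the generator to start at 1 like the [1..30] list does:

        let getSequence =
            Seq.initInfinite (fun n -> n + 1)   // start at 1; fibonacci 0 never terminates
            |> Seq.map debugfibonacci
            |> Seq.filter isEven
            |> Seq.takeWhile isUnderLimit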

  • Yahoo Query Language Problem

    - by Damiano
    Hello everybody! Today I've started with the Yahoo Query Language. I want to use it to retrieve stock details, so I'm talking about Yahoo Finance. I think there is a bug in this language. This is my query:

        select * from yahoo.finance.quoteslist where symbol='@^GSPC'

    I ALWAYS get 51 results! That's impossible; take a look at http://it.finance.yahoo.com/q/cp?s=^GSPC - there are 500 results! I also tried some paging parameters:

        select * from yahoo.finance.quoteslist(50,30) where symbol='@^GSPC' (to get from 50 to 80)
        select * from yahoo.finance.quoteslist(100) where symbol='@^GSPC' (to get the first 100 results)
        select * from yahoo.finance.quoteslist where symbol='@^GSPC' limit 30 offset 50

    but ALWAYS the last stock is:

        <quote symbol="BBY">
          <Symbol>BBY</Symbol>
          <LastTradePriceOnly>41.03</LastTradePriceOnly>
          <LastTradeDate>5/7/2010</LastTradeDate>
          <LastTradeTime>4:00pm</LastTradeTime>
          <Change>-0.48</Change>
          <Open>41.35</Open>
          <DaysHigh>42.35</DaysHigh>
          <DaysLow>39.60</DaysLow>
          <Volume>14129531</Volume>
        </quote>

    Why do I have this kind of problem? Thank you so much for your support! (P.S. I've tested it in the Yahoo YQL console)
