Search Results

Search found 108518 results on 4341 pages for 'reading source code'.

Page 153/4341

  • free open-source linux screenshot & ocr tool

    - by Gryllida
    I'm looking for a tool which can capture a screen region, pass it to OCR, and put the result into the clipboard.

        import ppm:- | gocr -i - | xclip -selection c

    works, but gocr is unreliable: simple text on a webpage has errors. It is a clear font, but the OCR tool always misses "r" and replaces it with an underscore.

        import ppm:- | ocrad -i - | xclip -selection c

    says: ocrad: maxval 255 in ppm "P6" file. tesseract needs an image file and does not accept piped input. xfce4-screenshooter does not do OCR. ABBYY Screenshot Reader is proprietary. tessnet2 is freeware running on a proprietary platform. Google Docs can OCR screenshots in a batch, but my data is confidential and better not put online. Graphical-interface solutions would be acceptable for this question, too.

    There are a number of existing SuperUser questions about OCR. They fall into several categories.

    Questions just about OCR, without the "screenshot taking" part:
        Open Source OCR for linux
        Free OCR for Arabic text
        Looking for recommendations on OCR problem - tabular numeric data
        Which has better OCR applications: Ubuntu, or Mac/iPad, or Windows?
        How can I perform OCR from the command line?
        OCR solution on linux machine from command line (duplicate)
        Free OCR software
        OCR for Sanskrit ( OR devanagari)
        Copy image and paste to OCR (windows)

    File processing OCR instead of screenshot:
        Online OCR website for processing an entire pdf file at one time?
        Practical OCR solution for converting a large book to a digital format?
        How to extract text with OCR from a PDF on Linux?
        Batch-OCR many PDFs
        OCR Image based PDF
        Copy image and paste to OCR
        Extract OCR text from Evernote
        OCR in Word 2013
        Replace (OCR) garbled text in PDF?

    Process files prior to running OCR:
        How can I make OCR recognize my documents' text better?
        Tesseract OCR recognition bilingual document. mistakes tolerance level setup
        OCR for low quality images
        How do I get the best quality screenshot for OCR (Optical Character Recognition) and what tool would be the best for screenshots?

    OCR training:
        Training Tesseract-OCR for english language fonts

    None of them answers this question.
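
    A possible workaround for the tesseract limitation mentioned above, sketched on the assumption that ImageMagick's import and xclip are available (paths are illustrative): write the capture to a temporary file, since tesseract takes an image file and an output base name rather than a pipe.

        # capture region -> temp file -> OCR -> clipboard (a sketch, not a tested recipe)
        import /tmp/shot.png && \
        tesseract /tmp/shot.png /tmp/shot && \
        xclip -selection c < /tmp/shot.txt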

    Read the article

  • Detecting source of memory usage on a Linux box

    - by apeace
    I have a toy Linux box with 256 MB RAM running Ubuntu 10.04.1 LTS. Here is the output of free -m:

                     total       used       free     shared    buffers     cached
        Mem:           245        122        122          0         19         64
        -/+ buffers/cache:         38        206
        Swap:          511          0        511

    Unless I'm reading this wrong, 122 MB is being used and only 84 MB of that is disk cache. Here are all the processes I'm running, sorted by memory usage (ps -eo pmem,pcpu,rss,vsize,args | sort -k 1 -r):

        %MEM %CPU RSS VSZ COMMAND
        5.0 0.0 12648 633140 node /home/node/main/sites.js
        1.5 0.0 3884 251736 /usr/sbin/console-kit-daemon --no-daemon
        1.3 0.0 3328 77108 sshd: apeace [priv]
        0.9 0.0 2344 19624 -bash
        0.7 0.0 1776 23620 /sbin/init
        0.6 0.0 1624 77108 sshd: apeace@pts/0
        0.6 0.0 1544 9940 redis-server /etc/redis/redis.conf
        0.6 0.0 1524 25848 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 103:105
        0.5 0.0 1324 119880 rsyslogd -c4
        0.4 0.0 1084 49308 /usr/sbin/sshd
        0.4 0.0 1028 44376 /usr/sbin/exim4 -bd -q30m
        0.3 0.0 904 6876 ps -eo pmem,pcpu,rss,vsize,args
        0.3 0.0 888 21124 cron
        0.3 0.0 868 23472 dbus-daemon --system --fork
        0.2 0.0 732 19624 -bash
        0.2 0.0 628 6128 /sbin/getty -8 38400 tty1
        0.2 0.0 628 16952 upstart-udev-bridge --daemon
        0.2 0.0 564 16800 udevd --daemon
        0.2 0.0 552 16796 udevd --daemon
        0.2 0.0 548 16796 udevd --daemon
        0.0 0.0 0 0 [xenwatch]
        0.0 0.0 0 0 [xenbus]
        0.0 0.0 0 0 [sync_supers]
        0.0 0.0 0 0 [netns]
        0.0 0.0 0 0 [migration/3]
        0.0 0.0 0 0 [migration/2]
        0.0 0.0 0 0 [migration/1]
        0.0 0.0 0 0 [migration/0]
        0.0 0.0 0 0 [kthreadd]
        0.0 0.0 0 0 [kswapd0]
        0.0 0.0 0 0 [kstriped]
        0.0 0.0 0 0 [ksoftirqd/3]
        0.0 0.0 0 0 [ksoftirqd/2]
        0.0 0.0 0 0 [ksoftirqd/1]
        0.0 0.0 0 0 [ksoftirqd/0]
        0.0 0.0 0 0 [ksnapd]
        0.0 0.0 0 0 [kseriod]
        0.0 0.0 0 0 [kjournald]
        0.0 0.0 0 0 [khvcd]
        0.0 0.0 0 0 [khelper]
        0.0 0.0 0 0 [kblockd/3]
        0.0 0.0 0 0 [kblockd/2]
        0.0 0.0 0 0 [kblockd/1]
        0.0 0.0 0 0 [kblockd/0]
        0.0 0.0 0 0 [flush-202:1]
        0.0 0.0 0 0 [events/3]
        0.0 0.0 0 0 [events/2]
        0.0 0.0 0 0 [events/1]
        0.0 0.0 0 0 [events/0]
        0.0 0.0 0 0 [crypto/3]
        0.0 0.0 0 0 [crypto/2]
        0.0 0.0 0 0 [crypto/1]
        0.0 0.0 0 0 [crypto/0]
        0.0 0.0 0 0 [cpuset]
        0.0 0.0 0 0 [bdi-default]
        0.0 0.0 0 0 [async/mgr]
        0.0 0.0 0 0 [aio/3]
        0.0 0.0 0 0 [aio/2]
        0.0 0.0 0 0 [aio/1]
        0.0 0.0 0 0 [aio/0]

    Now, I know that ps is not the best way to view process memory usage, but that's because it tends to report more memory than is actually being used, meaning that no matter how you look at it, all my processes combined shouldn't be using anywhere near 122 MB, even if you account for the disk cache. What's more, memory usage is growing all the time. I've had to restart my server once a week, because once my 256 MB fills up it starts swapping, which it wouldn't do just for disk cache. Shouldn't there be some way for me to see the culprit? I'm new to server admin, so if there's something obvious I'm overlooking, please point it out to me.
    Just for good measure, the output of cat /proc/meminfo:

        MemTotal:        251140 kB
        MemFree:         124604 kB
        Buffers:          20536 kB
        Cached:           66136 kB
        SwapCached:           0 kB
        Active:           65004 kB
        Inactive:         37576 kB
        Active(anon):     15932 kB
        Inactive(anon):     164 kB
        Active(file):     49072 kB
        Inactive(file):   37412 kB
        Unevictable:          0 kB
        Mlocked:              0 kB
        SwapTotal:       524284 kB
        SwapFree:        524284 kB
        Dirty:                8 kB
        Writeback:            0 kB
        AnonPages:        15916 kB
        Mapped:           10668 kB
        Shmem:              188 kB
        Slab:             18604 kB
        SReclaimable:     10088 kB
        SUnreclaim:        8516 kB
        KernelStack:        536 kB
        PageTables:        1444 kB
        NFS_Unstable:         0 kB
        Bounce:               0 kB
        WritebackTmp:         0 kB
        CommitLimit:     649852 kB
        Committed_AS:     64224 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:        752 kB
        VmallocChunk:   34359737600 kB
        DirectMap4k:     262144 kB
        DirectMap2M:          0 kB

    EDIT: I had misinterpreted the meaning of free -m at first. But even so: the important thing is that my OS eventually begins to use swap if I don't restart the server, which disk caching wouldn't cause. So where do I look to see what is using all this memory?
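
    A hedged starting point, not from the original post: since the per-process numbers above look small, it may be worth watching the kernel-side counters as the box approaches its weekly swap state (tool availability varies by install).

        egrep 'AnonPages|Slab|SUnreclaim' /proc/meminfo   # process heap vs. kernel slab growth
        slabtop -o | head -15                             # which slab caches are growing
        sudo smem -tk                                     # proportional per-process totals, if smem is installed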

    Read the article

  • Nginx Password Protect Directory Downloads Source Code

    - by Pamela
    I'm trying to password protect a WordPress login page on my Nginx server. When I navigate to http://www.example.com/wp-login.php, this brings up the "Authentication Required" prompt (not the WordPress login page) for a username and password. However, when I input the correct credentials, it downloads the PHP source code (wp-login.php) instead of showing the WordPress login page. Permission for my htpasswd file is set to 644. Here are the directives in question within the server block of my website's configuration file:

        location ^~ /wp-login.php {
            auth_basic "Restricted Area";
            auth_basic_user_file htpasswd;
        }

    Alternately, here are the entire contents of my configuration file (including the above four lines):

        server {
            listen *:80;
            server_name domain.com www.domain.com;
            root /var/www/domain.com/web;
            index index.html index.htm index.php index.cgi index.pl index.xhtml;
            error_log /var/log/ispconfig/httpd/domain.com/error.log;
            access_log /var/log/ispconfig/httpd/domain.com/access.log combine$
            location ~ /\. {
                deny all;
                access_log off;
                log_not_found off;
            }
            location = /favicon.ico {
                log_not_found off;
                access_log off;
            }
            location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
            }
            location /stats/ {
                index index.html index.php;
                auth_basic "Members Only";
                auth_basic_user_file /var/www/web/stats/.htp$
            }
            location ^~ /awstats-icon {
                alias /usr/share/awstats/icon;
            }
            location ~ \.php$ {
                try_files /b371b8bbf0b595046a2ef9ac5309a1c0.htm @php;
            }
            location @php {
                try_files $uri =404;
                include /etc/nginx/fastcgi_params;
                fastcgi_pass unix:/var/lib/php5-fpm/web11.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_intercept_errors on;
            }
            location / {
                try_files $uri $uri/ /index.php?$args;
                client_max_body_size 64M;
            }
            location ^~ /wp-login.php {
                auth_basic "Restricted Area";
                auth_basic_user_file htpasswd;
            }
        }

    If it makes any difference, I'm using Ubuntu 14.04.1 LTS with Nginx 1.4.6 and ISPConfig 3.0.5.4p3.
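
    A likely explanation, offered as an assumption rather than a confirmed fix: nginx picks exactly one location per request, and a ^~ prefix match beats the ~ \.php$ regex, so requests for /wp-login.php never reach the FastCGI handler and the file is served raw. Repeating the PHP handling inside the protected location (socket path copied from the config above, htpasswd path assumed absolute) would look roughly like this:

        location ^~ /wp-login.php {
            auth_basic "Restricted Area";
            auth_basic_user_file /etc/nginx/htpasswd;   # absolute path assumed
            try_files $uri =404;
            include /etc/nginx/fastcgi_params;
            fastcgi_pass unix:/var/lib/php5-fpm/web11.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_intercept_errors on;
        }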

    Read the article

  • WSUS is not using Akamai CDN for synchronisation source

    - by Geekman
    I've just installed WSUS on our network, and I'm currently doing the initial sync. I've found that WSUS does not seem to be talking to an Akamai cache, but rather to MS directly. This is contrary to what I've always thought regarding Windows Update traffic. Here is a tcpdump of our WSUS server doing the initial sync; as you can see, it's speaking with 65.55.194.221. For me to reach this IP, I have to go over international transit links, which is of course not ideal.

        18:42:31.279757 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [.], seq 4379374:4380834, ack 289611, win 256, length 1460
        18:42:31.279759 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [.], seq 4380834:4382294, ack 289611, win 256, length 1460
        18:42:31.279762 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [.], seq 4382294:4383754, ack 289611, win 256, length 1460
        18:42:31.279764 IP 65.55.194.221.https > XXXX.XXXX.XXXX.XXXX.50888: Flags [P.], seq 4383754:4384144, ack 289611, win 256, length 390
        18:42:31.279793 IP XXXX.XXXX.XXXX.XXXX.50888 > 65.55.194.221.https: Flags [.], ack 4369154, win 23884, length 0
        18:42:31.279888 IP XXXX.XXXX.XXXX.XXXX.50888 > 65.55.194.221.https: Flags [.], ack 4377914, win 23884, length 0
        18:42:31.280015 IP XXXX.XXXX.XXXX.XXXX.50888 > 65.55.194.221.https: Flags [.], ack 4384144, win 23884, length 0

    And yet, if I ping download.windowsupdate.com, it resolves to a local (national) Akamai node just fine:

        root@some-node:~# ping download.windowsupdate.com
        PING a26.ms.akamai.net (210.9.88.48) 56(84) bytes of data.
        64 bytes from a210-9-88-48.deploy.akamaitechnologies.com (210.9.88.48): icmp_req=1 ttl=59 time=1.02 ms
        64 bytes from a210-9-88-48.deploy.akamaitechnologies.com (210.9.88.48): icmp_req=2 ttl=59 time=1.10 ms

    Why is this? And how can I change it (if possible)? I know that I can manually specify a WSUS source to sync with instead of picking the default Microsoft Update, as I currently have, but it seems like I shouldn't have to do this. NOTE: I haven't confirmed whether a WUA speaks with Akamai; I'm just looking at WSUS, as all WUAs will use our internal WSUS from now on. We'll be looking to join an IX shortly, with the hope of peering with an Akamai cache and having very fast access to Windows Updates. Before I let this drive my motivation for an IX at all, I first want to confirm that it's actually possible for WSUS to speak with an Akamai cache. I know this is somewhat networking related, but I feel like it has more to do with WSUS than anything, so someone who knows WSUS better than I do will likely be able to figure this out.

    Read the article

  • open source knowledge base CMS system

    - by Thomi
    I'm looking for an open-source knowledge base system that uses tags, rather than free-text search, to identify articles (a lot like serverfault does). I've looked at twiki, which many people suggested, but haven't found what I'm looking for. Basically, I want to be able to create and tag articles, and provide an easy way for anonymous users to search based on tags.

    Edit: OK, here's some more detail regarding what I want. Basically, all the knowledge base systems I have seen so far are a collection of articles, each article with a title. Most of them allow you to categorise articles into groups and sub-groups. Users of the system can search for information using a title search, for example "How do I print from AwesomeProduct?", which then shows a list of any articles that match that search text. This is fine and dandy when your KB is for one version of the software product (the mythical AwesomeProduct ver 1.0). However, the development team then go ahead and create a new version (ver 2.0) that adds many new features and changes some existing features.

    Now, how do we support both products in the same KB? The naive method is to copy all articles from 1.0 and update them for 2.0, adding and removing articles in 2.0 as required. We can then add text at the top of every 1.0 article that says "this article applies to 1.0 only; to see the 2.0 version, click here" (or something similar). The problem with articles being indexed in the system by title is that it's very hard to filter based on metadata like version. What happens when we create version 3.0 or 4.0? The end situation here is that you have a mess of articles. They're hard to search, hard to filter, and even harder to manage.

    The solution (it seems to me) is to use tags, rather than text, as the article index mechanism. So articles can be tagged with a tag representing the software version, topic area, etc. Users can then filter based on tags: an example search might be "version_1 printing", which straight away gives a list of articles with all these tags. So that's what I'm looking for: a KB system that uses tags, rather than text, to index many articles. I'm sure I could build something with drupal, but I was hoping for something that works out of the box.

    Read the article

  • Reading gml in c#

    - by taudorf
    I have a problem reading some GML files in C#. My files do not have a schema or namespace declarations, and look like the file from this question: http://stackoverflow.com/questions/1818147/help-parsing-gml-data-using-c-linq-to-xml only without the schema, like this:

        <gml:Polygon srsName='http://www.opengis.net/gml/srs/epsg.xml#4283'>
          <gml:outerBoundaryIs>
            <gml:LinearRing>
              <gml:coord>
                <gml:X>152.035953</gml:X>
                <gml:Y>-28.2103190007845</gml:Y>
              </gml:coord>
              <gml:coord>
                <gml:X>152.035957</gml:X>
                <gml:Y>-28.2102020007845</gml:Y>
              </gml:coord>
              <gml:coord>
                <gml:X>152.034636</gml:X>
                <gml:Y>-28.2100120007845</gml:Y>
              </gml:coord>
              <gml:coord>
                <gml:X>152.034617</gml:X>
                <gml:Y>-28.2101390007845</gml:Y>
              </gml:coord>
              <gml:coord>
                <gml:X>152.035953</gml:X>
                <gml:Y>-28.2103190007845</gml:Y>
              </gml:coord>
            </gml:LinearRing>
          </gml:outerBoundaryIs>
        </gml:Polygon>

    When I try to read the document with the XDocument.Load method, I get an exception saying: 'gml' namespace is not defined. I have a lot of GML files, so I do not want to add the schema and namespaces to all of them. Does anybody know how to read my files?
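
    One common workaround, sketched here as an assumption (file name and namespace URI are illustrative): declare the gml prefix yourself by wrapping the fragment in a dummy root element, then query with LINQ to XML.

        // Hedged sketch: bind the gml prefix ourselves, since the files lack
        // any namespace declaration of their own.
        using System;
        using System.IO;
        using System.Xml.Linq;

        class GmlLoader
        {
            static void Main()
            {
                string fragment = File.ReadAllText("polygon.gml"); // assumed file name
                string wrapped =
                    "<root xmlns:gml=\"http://www.opengis.net/gml\">" + fragment + "</root>";
                XDocument doc = XDocument.Parse(wrapped);
                XNamespace gml = "http://www.opengis.net/gml";
                foreach (XElement coord in doc.Descendants(gml + "coord"))
                {
                    Console.WriteLine("{0}, {1}",
                        (string)coord.Element(gml + "X"),
                        (string)coord.Element(gml + "Y"));
                }
            }
        }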

    Read the article

  • Reading excel files with xlrd

    - by snurre
    I'm having problems reading .xls files written by a Perl script which I have no control over. The files contain some formatting and line breaks within cells.

        filename = '/home/shared/testfile.xls'
        book = xlrd.open_workbook(filename)
        sheet = book.sheet_by_index(0)
        for rowIndex in xrange(1, sheet.nrows):
            row = sheet.row(rowIndex)

    This is throwing the following error:

        _locate_stream(Workbook): seen
        0 5 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 20 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
        172480= 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
        172500 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 3 2
        172520 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
        173840= 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
        173860 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1
        173880 1 1 1 1 1 1 1 1
        Traceback (most recent call last):
          File "/home/shared/xlrdtest.py", line 5, in <module>
            book = xlrd.open_workbook(filename)
          File "/usr/local/lib/python2.7/site-packages/xlrd/__init__.py", line 443, in open_workbook
            ragged_rows=ragged_rows,
          File "/usr/local/lib/python2.7/site-packages/xlrd/book.py", line 84, in open_workbook_xls
            ragged_rows=ragged_rows,
          File "/usr/local/lib/python2.7/site-packages/xlrd/book.py", line 616, in biff2_8_load
            self.mem, self.base, self.stream_len = cd.locate_named_stream(qname)
          File "/usr/local/lib/python2.7/site-packages/xlrd/compdoc.py", line 393, in locate_named_stream
            d.tot_size, qname, d.DID+6)
          File "/usr/local/lib/python2.7/site-packages/xlrd/compdoc.py", line 421, in _locate_stream
            raise CompDocError("%s corruption: seen[%d] == %d" % (qname, s, self.seen[s]))
        xlrd.compdoc.CompDocError: Workbook corruption: seen[2] == 4

    I'm not able to find any info about CompDocError or "Workbook corruption", even less the seen[2] == 4 part.

    Read the article

  • Version control a content management system?

    - by Mike
    I have the following directory structure in the CMS application we have written:

        /application
        /modules
            /cms
            /filemanager
            /block
            /pages
            /sitemap
            /youtube
            /rss
        /skin
            /backend
                /default
                    /css
                    /js
                    /images
            /frontend
                /default
                    /css
                    /js
                    /images

    Application contains code specific to the current CMS implementation, i.e. code for this specific CMS. Modules contain reusable portions of code that we share across projects, such as libraries to work with youtube or rss feeds. We include these as git submodules, so that we can update a module in any website and push the changes back across all other projects. This makes it really easy to apply a change to our code and distribute it. We wanted to turn the CMS itself into a module so we get the same benefit: we can run the entire project under source control, then update the CMS as required through a git submodule. We have run into a problem, however: the CMS requires javascript/images/css in order to work correctly. Things we have thought about:

    We could create two submodules, one for cms-skin and one for cms, but this means you cannot "git pull" one version without having some idea of which versions of skin work with which versions of cms, i.e. version 1.2.2 CMS might have issues with 1.0.3 CMS-Skin.

    We could add the skin to the cms module, but this has the following problems: skin should be available under the document root, module code shouldn't be (and if it is, it should probably be secured via .htaccess), and it doesn't seem to make any sense bundling assets with PHP code.

    We could create a symlink from /skin/backend/ to /modules/cms/skin, but does this cause any security problems, and do we want to require something like a symlink for the application to work? (A sketch of this option follows below.)

    We could create a git hook or a shell script that copies files from modules/cms/skin to skin/backend when an update occurs, but this means we lose the ability to edit CMS core files in a project and then push them back.

    How is this typically done in large-scale CMSs? How is it possible to get the source code for a CMS under version control, work on the application for a client, then update the source code as releases are made by the vendor? How do applications like Magento or Drupal do this?
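
    The symlink option mentioned above, as a minimal sketch run from the project root (paths assumed from the tree; how /modules is kept unserved depends on the web server):

        # expose only the submodule's skin assets inside the web root
        ln -s ../modules/cms/skin/backend skin/backend
        # and keep /modules itself out of the document root, or deny it in server config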

    Read the article

  • Reading audio with Extended Audio File Services (ExtAudioFileRead)

    - by Paperflyer
    I am working on understanding Core Audio, or rather: Extended Audio File Services. Here, I want to use ExtAudioFileRead() to read some audio data from a file. This works fine as long as I use one single huge buffer to store my audio data (that is, one AudioBuffer). As soon as I use more than one AudioBuffer, ExtAudioFileRead() returns the error code -50 ("error in parameter list"). As far as I can figure out, this means that one of the arguments of ExtAudioFileRead() is wrong. Probably the audioBufferList. I cannot use one huge buffer because then dataByteSize would overflow its UInt32 integer range with huge files. Here is the code that creates the audioBufferList:

        AudioBufferList *audioBufferList;
        audioBufferList = malloc(sizeof(AudioBufferList) + (numBuffers-1)*sizeof(AudioBuffer));
        audioBufferList->mNumberBuffers = numBuffers;
        for (int bufferIdx = 0; bufferIdx<numBuffers; bufferIdx++) {
            audioBufferList->mBuffers[bufferIdx].mNumberChannels = numChannels;
            audioBufferList->mBuffers[bufferIdx].mDataByteSize = dataByteSize;
            audioBufferList->mBuffers[bufferIdx].mData = malloc(dataByteSize);
        }
        UInt32 numFrames = fileLengthInFrames;
        error = ExtAudioFileRead(extAudioFileRef, &numFrames, audioBufferList);

    Do you know what I am doing wrong here?
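
    For comparison, a sketch of a chunked approach (an assumption about intent, not the original poster's code): keep a single modest interleaved buffer and call ExtAudioFileRead() in a loop, since it advances the file position on each call. This avoids both the giant allocation and the multi-buffer pitfall (with more than one buffer, Core Audio expects non-interleaved data, one channel per buffer).

        // Hedged sketch: read any length of file with one reusable 4096-frame buffer.
        #include <AudioToolbox/AudioToolbox.h>
        #include <stdlib.h>

        static OSStatus ReadAllFrames(ExtAudioFileRef file, UInt32 channels, UInt32 bytesPerFrame)
        {
            const UInt32 framesPerChunk = 4096;
            AudioBufferList bufferList;                 // one interleaved buffer
            bufferList.mNumberBuffers = 1;
            bufferList.mBuffers[0].mNumberChannels = channels;
            bufferList.mBuffers[0].mData = malloc(framesPerChunk * bytesPerFrame);

            OSStatus err = noErr;
            for (;;) {
                UInt32 frames = framesPerChunk;
                // reset each pass: ExtAudioFileRead overwrites mDataByteSize
                bufferList.mBuffers[0].mDataByteSize = framesPerChunk * bytesPerFrame;
                err = ExtAudioFileRead(file, &frames, &bufferList);
                if (err != noErr || frames == 0)
                    break;                              // error or end of file
                // ... process frames * bytesPerFrame bytes at bufferList.mBuffers[0].mData
            }
            free(bufferList.mBuffers[0].mData);
            return err;
        }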

    Read the article

  • Reading a POP3 server with only TcpClient and StreamWriter/StreamReader

    - by WebDevHobo
    I'm trying to read mails from my live.com account via the POP3 protocol. I've found that the server is pop3.live.com and the port is 587. I'm not planning on using a pre-made library; I'm using NetworkStream and StreamReader/StreamWriter for the job. I need to figure this out, so any of the answers given here: http://stackoverflow.com/questions/44383/reading-email-using-pop3-in-c are not useful. It's part of a larger program, but I made a small test to see if it works. Either way, I'm not getting anything. Here's the code I'm using, which I think should be correct:

        public Program()
        {
            string temp = "";
            using(TcpClient tc = new TcpClient(new IPEndPoint(IPAddress.Parse("127.0.0.1"), 8000)))
            {
                tc.Connect("pop3.live.com", 587);
                using(NetworkStream nws = tc.GetStream())
                {
                    using(StreamReader sr = new StreamReader(nws))
                    {
                        using(StreamWriter sw = new StreamWriter(nws))
                        {
                            sw.WriteLine("USER " + user);
                            sw.Flush();
                            sw.WriteLine("PASS " + pass);
                            sw.Flush();
                            sw.WriteLine("LIST");
                            sw.Flush();
                            while(temp != ".")
                            {
                                temp += sr.ReadLine();
                            }
                        }
                    }
                }
            }
            Console.WriteLine(temp);
        }

    So, I'm sending from port 8000 on my machine to port 587, the hotmail POP3 port, and I'm getting nothing. I'm out of ideas.
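
    For reference, a hedged sketch of a session that reads replies as it goes: POP3 answers every command with a +OK/-ERR line that should be consumed before sending the next one, and Microsoft's POP3 service is generally reached over SSL on port 995 rather than 587 (which is SMTP submission). The credentials below are illustrative.

        using System;
        using System.IO;
        using System.Net.Security;
        using System.Net.Sockets;

        class Pop3Probe
        {
            static void Main()
            {
                using (TcpClient tc = new TcpClient("pop3.live.com", 995))
                using (SslStream ssl = new SslStream(tc.GetStream()))
                {
                    ssl.AuthenticateAsClient("pop3.live.com");
                    StreamReader sr = new StreamReader(ssl);
                    StreamWriter sw = new StreamWriter(ssl) { AutoFlush = true };

                    Console.WriteLine(sr.ReadLine());        // server greeting
                    sw.WriteLine("USER someone@live.com");   // assumed account
                    Console.WriteLine(sr.ReadLine());
                    sw.WriteLine("PASS secret");
                    Console.WriteLine(sr.ReadLine());
                    sw.WriteLine("LIST");
                    Console.WriteLine(sr.ReadLine());        // +OK, then one line per message
                    string line;
                    while ((line = sr.ReadLine()) != null && line != ".")
                        Console.WriteLine(line);
                    sw.WriteLine("QUIT");
                }
            }
        }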

    Read the article

  • PDCurses TUI C++ Win32 console app - Access violation reading location

    - by Bach
    I have downloaded the PDCurses source and was able to successfully include curses.h in my project, link the pre-compiled library, and all good. After a few hours of trying out the library, I saw tuidemo.c in the demos folder, compiled it into an executable, and brilliant! Exactly what I needed for my project. Now the problem is that it's C code, and I am working on a C++ project in VS C++ 2008. The files I need are tui.c and tui.h. How can I include that C file in my C++ code? I saw a few suggestions here, but the compiler was not too happy, with hundreds of warnings and errors. How can I go about including/using that TUI from the PDCurses includes? Thanks

    EDIT: I added an extern "C" declaration, so my test looks like this now, but I'm getting another type of error:

        #include <stdio.h>
        #include <stdlib.h>

        using namespace std;

        extern "C" {
        #include <tui.h>
        }

        void sub0(void)
        {
            // do nothing
        }

        void sub1(void)
        {
            // do nothing
        }

        int main(int argc, char * const argv[])
        {
            menu MainMenu[] =
            {
                { "Asub", sub0, "Go inside first submenu" },
                { "Bsub", sub1, "Go inside second submenu" },
                { "", (FUNC)0, "" }  /* always add this as the last item! */
            };
            startmenu(MainMenu, "TUI - 'textual user interface' demonstration program");
            return 0;
        }

    Although it compiles successfully, it throws an error at runtime which suggests a bad pointer: 0xC0000005: Access violation reading location 0x021c52f9, at the line startmenu(MainMenu, "TUI - 'textual user interface' demonstration program"); Not sure where to go from here. Thanks again.

    Read the article

  • Writing String to Stream and reading it back does not work

    - by Binary255
    I want to write a String to a Stream (a MemoryStream in this case) and read the bytes one by one.

        stringAsStream = new MemoryStream();
        UnicodeEncoding uniEncoding = new UnicodeEncoding();
        String message = "Message";
        stringAsStream.Write(uniEncoding.GetBytes(message), 0, message.Length);
        Console.WriteLine("This:\t\t" + (char)uniEncoding.GetBytes(message)[0]);
        Console.WriteLine("Differs from:\t" + (char)stringAsStream.ReadByte());

    The (undesired) result I get is:

        This:           M
        Differs from:   ?

    It looks like it's not being read correctly: the first char of "Message" is 'M', which works when getting the bytes from the UnicodeEncoding instance, but not when reading them back from the stream. What am I doing wrong?

    The bigger picture: I have an algorithm which will work on the bytes of a Stream, and I'd like to be as general as possible and work with any Stream. I'd like to convert an ASCII string into a MemoryStream, or maybe use another method to be able to work on the string as a Stream. The algorithm in question will work on the bytes of the Stream.
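
    Two likely culprits, sketched here as assumptions: Write() is being given the character count (message.Length) rather than the byte count, so only half of the UTF-16 bytes are written, and the stream is read without first seeking back to position 0 (ReadByte() at end-of-stream returns -1, which prints as '?').

        using System;
        using System.IO;
        using System.Text;

        class StreamRoundTrip
        {
            static void Main()
            {
                var uni = new UnicodeEncoding();
                byte[] bytes = uni.GetBytes("Message");

                using (var ms = new MemoryStream())
                {
                    ms.Write(bytes, 0, bytes.Length);  // byte count, not string length
                    ms.Seek(0, SeekOrigin.Begin);      // rewind before reading

                    Console.WriteLine("This:\t\t" + (char)bytes[0]);
                    Console.WriteLine("Same as:\t" + (char)ms.ReadByte());
                }
            }
        }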

    Read the article

  • Reading Xml with XmlReader in C#

    - by Gloria Huang
    I'm trying to read the following XML document as fast as I can and let additional classes manage the reading of each sub-block:

        <ApplicationPool>
          <Accounts>
            <Account>
              <NameOfKin></NameOfKin>
              <StatementsAvailable>
                <Statement></Statement>
              </StatementsAvailable>
            </Account>
          </Accounts>
        </ApplicationPool>

    I'm trying to use the XmlReader object to read each Account and subsequently the StatementsAvailable. Do you suggest using XmlReader.Read, checking each element and handling it? I've thought of separating my classes to handle each node properly. So there's an AccountBase class that accepts an XmlReader instance and reads the NameOfKin and several other properties about the account. Then I want to iterate through the Statements and let another class fill itself out about each Statement (and subsequently add it to an IList). Thus far I have the "per class" part done by doing XmlReader.ReadElementString(), but I can't work out how to tell the pointer to move to the StatementsAvailable element, let me iterate through them, and let another class read each of those properties. Sounds easy!
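
    One way to get the "let each class read its own sub-block" behaviour, sketched under assumed element names from the XML above: XmlReader.ReadSubtree() hands the nested class a reader that is confined to the current element, so AccountBase-style code can consume it without walking past its boundaries.

        using System;
        using System.Xml;

        class AccountParser
        {
            static void Main()
            {
                using (XmlReader reader = XmlReader.Create("applicationpool.xml")) // assumed file
                {
                    while (reader.ReadToFollowing("Account"))
                    {
                        using (XmlReader accountReader = reader.ReadSubtree())
                        {
                            ReadAccount(accountReader);  // could be AccountBase's job
                        }
                    }
                }
            }

            static void ReadAccount(XmlReader r)
            {
                r.ReadToFollowing("NameOfKin");
                Console.WriteLine("Kin: " + r.ReadElementContentAsString());
                // Statements could likewise be delegated to a Statement class.
                while (r.ReadToFollowing("Statement"))
                    Console.WriteLine("  statement found");
            }
        }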

    Read the article

  • Timeout reading verity collection - CF8

    - by Gary
    For a long time now I've been having a problem using the Verity search service bundled with ColdFusion 8. The issue is timeout errors occurring when performing any operation on a collection. It's intermittent, and usually occurs after a few operations have been performed. For instance, if I'm adding records to a collection, the first, say, 15 records will go through with no problems, but all subsequent records will time out until the service is rebooted. I'm on a shared server: Windows 2008, 64-bit as far as I know. The error I receive is:

        An error occurred while performing an operation in the Search Engine library.
        Error reading collection information.: com.verity.api.administration.ConfigurationException:
        java.io.IOException: Read timed out

    Having spoken to my hosting company, and after doing some research, it's been suggested that the number of collections on a server may cause this issue. I've reduced the number of collections I use, and there are currently 39 collections on the server. As I'm on a shared server, I have no control over how many collections other customers use; however, I've read that the limit is 128 collections, so I don't see why 39 should make it unusable. The collections aren't big; there are maybe 5,000 records between all of them. Any ideas?

    Read the article

  • Basic Steps in reading Excel files into matlab

    - by user3693727
    >> [NUM,TXT,RAW]=xlsread('C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1')
        ??? Error using ==> xlsread at 219
        XLSREAD unable to open file C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.
        File C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.xls not found.

    This is the error I receive when I try to read a simple Excel file into MATLAB. This is a snapshot of the spreadsheet I would like to load in. Could you guide me through the basics of extracting this data? I have looked through the other questions about reading Excel files into MATLAB, but I am still very confused. I ultimately wish to extract the file below for my project using the same method. The second image shows the data I have to extract, which I could not do. Its file type seems to be different: it is a comma-separated values file, which is not xls. Hence, I am also confused about whether a different file type prevents extraction of the data. Thank you for helping (:
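
    A hedged first step, since the error says the file itself was not found: check the name (including the extension) before calling xlsread; for the second, comma-separated file, importdata or csvread may be the more natural tool. The path below is copied from the error message.

        % verify the file exists under that exact name, then read it
        fname = 'C:\Users\Lincoln Wachn\Google Drive\Summer time\Book1.xls';
        if exist(fname, 'file') ~= 2
            error('File not found: %s', fname);
        end
        [num, txt, raw] = xlsread(fname);   % for a .csv file, try importdata(fname)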

    Read the article

  • GZipStream not reading the whole file

    - by Ed
    I have some code that downloads gzipped files and decompresses them. The problem is, I can't get it to decompress the whole file; it only reads the first 4096 bytes and then about 500 more.

        Byte[] buffer = new Byte[4096];
        int count = 0;
        FileStream fileInput = new FileStream("input.gzip", FileMode.Open, FileAccess.Read, FileShare.Read);
        FileStream fileOutput = new FileStream("output.dat", FileMode.Create, FileAccess.Write, FileShare.None);
        GZipStream gzipStream = new GZipStream(fileInput, CompressionMode.Decompress, true);

        // Read from gzip stream
        while ((count = gzipStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Write to output file
            fileOutput.Write(buffer, 0, count);
        }

        // Close the streams
        ...

    I've checked the downloaded file; it's 13 MB when compressed and contains one XML file. I've manually decompressed the XML file, and the content is all there. But when I do it with this code, it only outputs the very beginning of the XML file. Anyone have any ideas why this might be happening?
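
    The loop itself looks sound, so a hedged elimination step: let Stream.CopyTo (.NET 4+) do the copying inside using blocks, which also guarantees the output stream is flushed and closed. If the result is still truncated, it would be worth checking whether the download already decompressed the data (e.g. a Content-Encoding: gzip response handled transparently by the HTTP client).

        using System.IO;
        using System.IO.Compression;

        class Decompress
        {
            static void Main()
            {
                using (FileStream input = File.OpenRead("input.gzip"))
                using (GZipStream gzip = new GZipStream(input, CompressionMode.Decompress))
                using (FileStream output = File.Create("output.dat"))
                {
                    gzip.CopyTo(output);  // copies until the gzip stream reports end-of-data
                }
            }
        }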

    Read the article

  • C# Error reading two dates from a binary file

    - by Jamie
    Hi all, when reading two dates from a binary file I'm seeing the error below:

        The output char buffer is too small to contain the decoded characters, encoding 'Unicode (UTF-8)'
        fallback 'System.Text.DecoderReplacementFallback'. Parameter name: chars

    My code is below:

        static DateTime[] ReadDates()
        {
            System.IO.FileStream appData = new System.IO.FileStream(
                appDataFile, System.IO.FileMode.Open, System.IO.FileAccess.Read);

            List<DateTime> result = new List<DateTime>();
            using (System.IO.BinaryReader br = new System.IO.BinaryReader(appData))
            {
                while (br.PeekChar() > 0)
                {
                    result.Add(new DateTime(br.ReadInt64()));
                }
                br.Close();
            }
            return result.ToArray();
        }

        static void WriteDates(IEnumerable<DateTime> dates)
        {
            System.IO.FileStream appData = new System.IO.FileStream(
                appDataFile, System.IO.FileMode.Create, System.IO.FileAccess.Write);

            List<DateTime> result = new List<DateTime>();
            using (System.IO.BinaryWriter bw = new System.IO.BinaryWriter(appData))
            {
                foreach (DateTime date in dates)
                    bw.Write(date.Ticks);
                bw.Close();
            }
        }

    What could be the cause? Thanks
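
    A hedged sketch of the usual fix: BinaryReader.PeekChar() tries to decode the next bytes as UTF-8 characters, and arbitrary tick values often are not valid UTF-8, which produces exactly this decoder error. Comparing the underlying stream's Position against its Length avoids decoding altogether.

        using System;
        using System.Collections.Generic;
        using System.IO;

        class DateFile
        {
            static DateTime[] ReadDates(string path)
            {
                var result = new List<DateTime>();
                using (var br = new BinaryReader(File.OpenRead(path)))
                {
                    // position/length test instead of PeekChar(): no text decoding involved
                    while (br.BaseStream.Position < br.BaseStream.Length)
                        result.Add(new DateTime(br.ReadInt64()));
                }
                return result.ToArray();
            }

            static void Main()
            {
                File.WriteAllBytes("dates.bin", BitConverter.GetBytes(DateTime.Now.Ticks)); // demo file
                foreach (DateTime d in ReadDates("dates.bin"))
                    Console.WriteLine(d);
            }
        }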

    Read the article

  • Sudden issues reading uncompressed video using opencv

    - by JohnSavage
    I have been using a particular pipeline to process video with OpenCV: I encode uncompressed video (fourcc = 0) and then use the OpenCV Python bindings to open and work on those files. This worked fine for me with OpenCV 2.3.1a on Ubuntu 11.10 until just a few days ago. For some reason it currently only lets me read the first frame of a given file, the first time I open that file. Further frames are not read, and once I have touched the file once with my program, it then cannot even read the first frame.

    In more detail: I created the uncompressed video files as follows:

        out_video.open(out_vid_name, 0, // FOURCC = 0 means record raw
                       fps, Size(640, 480))

    Again, these videos worked fine for me until about a week ago. Now, when I try to open one I get the following message (from what I think is ffmpeg):

        Processing video.avi
        Using network protocols without global network initialization. Please use avformat_network_init(), this will become mandatory later.
        [avi @ 0x29251e0] parser not found for codec rawvideo, packets or times may be invalid.

    It reads and displays the first frame fine, but then fails to read the next frame. Then, when I try to run my code on the same video, the capture still opens with the same message as above, but it cannot even read the very first frame. Here is the code to open the capture:

        self.capture = cv2.VideoCapture(filename)
        if not self.capture.isOpened():
            print "Error: could not open capture"
            sys.exit()

    Again, this part passes without any issue, but then the break happens at:

        success, rgb = self.capture.read()
        if not success:
            print "error: could not read frame"
            return False

    This part breaks at the second frame on the first run of the video file, and then on the first frame on subsequent runs. I really don't know where to even begin debugging this. Please help!

    Read the article

  • Generating code -- is there an easy way to get a proper string representation of nullable type?

    - by Cory Larson
    So I'm building an application that is going to do a ton of code generation with both C# and VB output (depending on project settings). I've got a CodeTemplateEngine, with two derived classes, VBTemplateEngine and CSharpTemplateEngine. This question regards creating the property signatures based on columns in a database table. Using the IDataReader's GetSchemaTable method, I gather the CLR type of the column, such as "System.Int32", and whether it IsNullable. However, I'd like to keep the generated code simple, and instead of having a property that looks like:

        public System.Int32? SomeIntegerColumn { get; set; }

    or

        public Nullable<System.Int32> SomeIntegerColumn { get; set; }

    where the property type would be resolved with this function (from my VBTemplateEngine):

        public override string ResolveCLRType(bool? isNullable, string runtimeType)
        {
            Type type = TypeUtils.ResolveType(runtimeType);
            if (isNullable.HasValue && isNullable.Value == true && type.IsValueType)
            {
                return "System.Nullable(Of " + type.FullName + ")";
                // or, for example... return type.FullName + "?";
            }
            else
            {
                return type.FullName;
            }
        }

    I would like to generate a simpler property. I hate the idea of building a type string from nothing, and I would rather have something like:

        public int? SomeIntegerColumn { get; set; }

    Is there anything built in anywhere, such as in the VBCodeProvider or CSharpCodeProvider classes, that would somehow take care of this for me? Or is there a way to get a type alias of int? from a type string like System.Nullable`1[System.Int32]? Thanks!
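
    One workable approach, sketched as an assumption rather than a known built-in: unwrap Nullable<T> with Nullable.GetUnderlyingType, map the primitive to its C# keyword through a small alias table, and append "?" for nullable value types. (CodeDom's GetTypeOutput may not produce the "?" shorthand, so the table is done by hand here, and it is deliberately partial.)

        using System;
        using System.Collections.Generic;

        static class TypeNames
        {
            static readonly Dictionary<Type, string> Aliases = new Dictionary<Type, string>
            {
                { typeof(int), "int" }, { typeof(long), "long" }, { typeof(short), "short" },
                { typeof(byte), "byte" }, { typeof(bool), "bool" }, { typeof(decimal), "decimal" },
                { typeof(double), "double" }, { typeof(float), "float" }, { typeof(string), "string" },
            };

            public static string ToCSharp(Type type, bool isNullable)
            {
                Type underlying = Nullable.GetUnderlyingType(type) ?? type;
                string alias;
                string name = Aliases.TryGetValue(underlying, out alias) ? alias : underlying.FullName;
                return (isNullable && underlying.IsValueType) ? name + "?" : name;
            }
        }

        // e.g. TypeNames.ToCSharp(typeof(int), true) -> "int?"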

    Read the article

  • Fork in Perl not working inside a while loop reading from file

    - by Sag
    Hi, I'm running a while loop reading each line in a file, and then forking processes with the data of each line to a child. After N lines I want to wait for the child processes to end and continue with the next N lines, etc. It looks something like this:

        while ($w=<INP>) {
            # ignore file header
            if ($w=~m/^\D/) { next; }
            # get data from line
            chomp $w;
            @ws = split(/\s/,$w);
            $m = int($ws[0]);
            $d = int($ws[1]);
            $h = int($ws[2]);
            # only for some days in the year
            if (($m==3)and($d==15) or ($m==4)and($d==21) or ($m==7)and($d==18)) {
                die "could not fork" unless defined (my $pid = fork);
                unless ($pid) {
                    # some instructions here using $m, $d, $h ...
                }
                push @qpid,$pid;
                # when all processors are busy, wait for child processes
                if ($#qpid==($procs-1)) {
                    for my $pid (@qpid) {
                        waitpid $pid, 0;
                    }
                    reset 'q';
                }
            }
        }
        close INP;

    This is not working. After the first round of processes I get some PIDs equal to 0, the @qpid array gets mixed up, and the file starts to get read at (apparently) random places, jumping back and forth. The end result is that most lines in the file get read two or three times. Any ideas? Thanks a lot in advance, S.
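
    A hedged sketch of the usual repair: each child must exit instead of falling back into the while loop, otherwise every forked copy keeps reading from the shared INP handle, which matches the jumping-around behaviour described. The reset 'q' call is also better replaced by plainly reassigning the array.

        # inside the loop (sketch; child work elided as in the original)
        die "could not fork" unless defined(my $pid = fork);
        unless ($pid) {
            # ... child work using $m, $d, $h ...
            exit 0;    # crucial: never let the child return to the read loop
        }
        push @qpid, $pid;
        if (@qpid == $procs) {
            waitpid($_, 0) for @qpid;
            @qpid = ();    # plain reassignment instead of reset 'q'
        }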

    Read the article

  • Qt - reading from a text file

    - by user289175
    Hello world, I have a table view with three columns. I have just managed to write into a text file using this code:

        QFile file("/home/hamad/lesson11.txt");
        if(!file.open(QIODevice::WriteOnly)) {
            QMessageBox::information(0, "error", file.errorString());
        }

        QString dd;
        for(int row=0; row < model->rowCount(); row++)
        {
            dd = model->item(row,0)->text() + "," + model->item(row,1)->text() + "," + model->item(row,2)->text();
            QTextStream out(&file);
            out << dd << endl;
        }

    But I have not succeeded in reading the same file back again. I tried this code, but I don't know where the problem is in it:

        QFile file("/home/hamad/lesson11.txt");
        QTextStream in(&file);
        QString line = in.readLine();
        while(!in.atEnd()) {
            QStringList fields = line.split(",");
            model->appendRow(fields);
        }

    Any help please?
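
    A sketch of the reading side with the likely gaps filled in (assumed: model is a QStandardItemModel): the file is never open()ed before reading, readLine() is only called once, outside the loop, and appendRow() wants QStandardItem objects rather than a raw QStringList.

        // assumed headers: <QFile>, <QTextStream>, <QMessageBox>, <QStandardItemModel>
        QFile file("/home/hamad/lesson11.txt");
        if (!file.open(QIODevice::ReadOnly | QIODevice::Text)) {
            QMessageBox::information(0, "error", file.errorString());
            return;
        }
        QTextStream in(&file);
        while (!in.atEnd()) {
            QString line = in.readLine();          // read a fresh line each pass
            QStringList fields = line.split(",");
            QList<QStandardItem*> items;
            foreach (const QString &field, fields)
                items.append(new QStandardItem(field));
            model->appendRow(items);               // one row of items per line
        }
        file.close();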

    Read the article

  • ERROR_MORE_DATA ---- Reading from Registry

    - by user314749
    I am trying to create an offline registry in memory using the offreg.dll provided in the Windows DDK 7 package. You can find more information on offreg.dll here: MSDN. Currently, while attempting to read a value from an open registry hive/key, I receive the following error: 234, or ERROR_MORE_DATA.

    Here is the .h declaration of ORGetValue:

        DWORD ORAPI ORGetValue (
            __in ORHKEY Handle,
            __in_opt PCWSTR lpSubKey,
            __in_opt PCWSTR lpValue,
            __out_opt PDWORD pdwType,
            __out_bcount_opt(*pcbData) PVOID pvData,
            __inout_opt PDWORD pcbData
            );

    Here is the code I am using to pull the data:

        [DllImport("offreg.dll", CharSet = CharSet.Auto, EntryPoint = "ORGetValue", SetLastError = true, CallingConvention = CallingConvention.StdCall)]
        public static extern uint ORGetValue(IntPtr Handle, string lpSubKey, string lpValue, out uint pdwType, out string pvData, out uint pcbData);

        IntPtr myHive;
        IntPtr myKey;
        string myValue;
        uint pdwtype;
        uint pcbdata;

        uint ret3 = ORGetValue(myKey, "", "DefaultUserName", out pdwtype, out myValue, out pcbdata);

    The goal is to be able to read myValue as a string. I am not sure if I need to use marshaling, or a second call with an adjusted buffer, or really how to adjust the buffer in C#. Any help or pointers would be greatly appreciated. Thank you.
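
    ERROR_MORE_DATA (234) is the API's way of saying the output buffer was too small, which suggests the usual two-call pattern, sketched here with assumptions about the marshaling: declare pvData as a byte[] rather than out string, call once with a null buffer to learn the required size, then call again and decode the UTF-16 bytes yourself.

        using System;
        using System.Runtime.InteropServices;
        using System.Text;

        class OffRegRead
        {
            // assumed marshaling: byte[] buffer plus ref size, CharSet.Unicode for the names
            [DllImport("offreg.dll", CharSet = CharSet.Unicode, CallingConvention = CallingConvention.StdCall)]
            static extern uint ORGetValue(IntPtr handle, string subKey, string value,
                                          out uint type, byte[] data, ref uint cbData);

            // myKey is assumed to be an ORHKEY obtained from OROpenKey/ORCreateKey
            static string GetString(IntPtr myKey, string name)
            {
                uint type;
                uint size = 0;
                ORGetValue(myKey, null, name, out type, null, ref size);  // size query
                byte[] buffer = new byte[size];
                uint ret = ORGetValue(myKey, null, name, out type, buffer, ref size);
                if (ret != 0) throw new Exception("ORGetValue failed: " + ret);
                return Encoding.Unicode.GetString(buffer, 0, (int)size).TrimEnd('\0');
            }
        }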

    Read the article

  • reading csv files in scipy/numpy in Python

    - by user248237
    I am having trouble reading a csv file, delimited by tabs, in Python. I use the following function:

        def csv2array(filename, skiprows=0, delimiter='\t', raw_header=False,
                      missing=None, with_header=True):
            """
            Parse a file name into an array. Return the array and additional
            header lines. By default, parse the header lines into dictionaries,
            assuming the parameters are numeric, using 'parse_header'.
            """
            f = open(filename, 'r')
            skipped_rows = []
            for n in range(skiprows):
                header_line = f.readline().strip()
                if raw_header:
                    skipped_rows.append(header_line)
                else:
                    skipped_rows.append(parse_header(header_line))
            f.close()
            if missing:
                data = genfromtxt(filename, dtype=None, names=with_header,
                                  deletechars='', skiprows=skiprows, missing=missing)
            else:
                if delimiter != '\t':
                    data = genfromtxt(filename, dtype=None, names=with_header,
                                      delimiter=delimiter, deletechars='',
                                      skiprows=skiprows)
                else:
                    data = genfromtxt(filename, dtype=None, names=with_header,
                                      deletechars='', skiprows=skiprows)
            if data.ndim == 0:
                data = array([data.item()])
            return (data, skipped_rows)

    The problem is that genfromtxt complains about my files, e.g. with the error:

        Line #27100 (got 12 columns instead of 16)

    I am not sure where these errors come from. Any ideas? Here's an example file that causes the problem:

        #Gene   120-1   120-3   120-4   30-1    30-3    30-4    C-1     C-2     C-5     genesymbol      genedesc
        ENSMUSG00000000001      7.32    9.5     7.76    7.24    11.35   8.83    6.67    11.35   7.12    Gnai3   guanine nucleotide binding protein alpha
        ENSMUSG00000000003      0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     0.0     Pbsn    probasin

    Is there a better way to write a generic csv2array function? Thanks.
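
    A hedged guess at the cause, with a sketch: in the delimiter == '\t' branch above, the delimiter is dropped, so genfromtxt falls back to splitting on any whitespace, and rows whose genedesc column contains spaces then produce the wrong column count. Passing the tab through explicitly should keep the columns intact (and genfromtxt can read field names from the commented "#Gene" header when names=True); the file name is illustrative.

        import numpy as np

        # keep tabs as the only delimiter so spaces inside text columns survive
        data = np.genfromtxt("genes.txt", dtype=None, names=True,
                             delimiter="\t", deletechars="")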

    Read the article

  • firefox reading web page from local JS file -- access to restricted URI denied, code: 1012, nsresult

    - by macias
    My problem is this: I have an HTML file which is really a JS program, which reads web pages and shows them in a customized manner (i.e. it displays the same content in a different way). Basically, I create an XMLHttpRequest object and then:

        req.open("GET", web_page_address, false);
        req.send("");

    This gives me (in Firefox) an error:

        Error: uncaught exception: [Exception... "Access to restricted URI denied"
        code: "1012" nsresult: "0x805303f4 (NS_ERROR_DOM_BAD_URI)"

    I have already googled, and looked at SO, but all the other issues are very similar except for two things. First, the file I open in Firefox is a local file, opened directly in the browser; I don't have a www server running at localhost. Second, I don't have any control over the web pages I am reading stuff from.

    So, several solutions I've seen so far (like adding a PHP proxy, or changing the way the external server sends data) cannot be applied here. What else can be done in such a case? On a side note, I wonder whether such strict security for a local file opened directly makes any sense. Thank you in advance for tips/links/etc. Have a nice day!

    Read the article

  • Access reading error when using class member variable

    - by bsg
    Hi, I have a class with private member variables declared in a header file. In my constructor, I pass in some filenames and create other objects using those names. This works fine. When I try to add another member variable, however, and initialize it in the constructor, I get an access reading violation. I sent the code to someone else and it works fine on his computer. Any idea what could be wrong? Here is the offending code.

    The .h file:

        class QUERYMANAGER {
            INDEXCACHE *cache;
            URLTABLE *table;
            SNIPPET *snip;
            int* iquery[MAX_QUERY_LENGTH];
            int* metapointers[MAX_QUERY_LENGTH];
            int blockpointers[MAX_QUERY_LENGTH];
            int docpositions[MAX_QUERY_LENGTH];
            int numberdocs[MAX_QUERY_LENGTH];
            int frequencies[MAX_QUERY_LENGTH];
            int docarrays[MAX_QUERY_LENGTH][256];
            int qsize;

        public:
            QUERYMANAGER();
            QUERYMANAGER(char *indexfname, char *btfname, char *urltablefname, char *snippetfname, char *snippetbtfname);
            ~QUERYMANAGER();

    This is the .cpp file:

        #include "querymanagernew.h"
        #include "snippet.h"
        using namespace std;

        QUERYMANAGER::QUERYMANAGER(char *indexfname, char *btfname, char *urltablefname, char *snippetfname, char *snippetbtfname){
            cache = new INDEXCACHE(indexfname, btfname);
            table = new URLTABLE(urltablefname);
            snip = new SNIPPET(snippetfname, snippetbtfname);  // this is where the error occurs
            qsize = 0;
        }

    I am totally at a loss as to what is causing this. Any ideas? Thanks, bsg

    Read the article
