Search Results

Search found 6448 results on 258 pages for 'pdf reader'.

Page 113/258 | < Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >

  • HttpWebRequest timeout in Windows service

    - by googler1
    I am getting a timeout error while starting my Windows service. I am trying to download an XML file from a remote system, which causes a timeout during the service's OnStart. This is the method I am calling from OnStart:

        public static StreamReader GetResponseStream()
        {
            try
            {
                EventLog.WriteEntry("Epo-Service_Retriver", "Trying ...", EventLogEntryType.Information);
                CookieContainer CC = new CookieContainer();
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(Utils.GetWeeklyPublishedURL());
                request.Proxy = null;
                request.UseDefaultCredentials = true;
                request.KeepAlive = true;                        // THIS DOES THE TRICK
                request.ProtocolVersion = HttpVersion.Version10; // THIS DOES THE TRICK
                request.CookieContainer = CC;
                WebResponse response = request.GetResponse();
                StreamReader reader = new StreamReader(response.GetResponseStream());
                EventLog.WriteEntry("Epo-Service_Retriver", "Connected to Internet...", EventLogEntryType.SuccessAudit);
                return reader;
            }
            catch // catch block missing from the original post; rethrow so the snippet compiles
            {
                throw;
            }
        }

    Is there any possibility to avoid this timeout?
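
    One way around this, sketched below: OnStart has to return quickly (the Service Control Manager only waits about 30 seconds), so do the download on a worker thread and set an explicit timeout on the request. This is a minimal illustration under those assumptions, not the poster's code; the class name and URL are placeholders.

        using System;
        using System.IO;
        using System.Net;
        using System.ServiceProcess;
        using System.Threading;

        public class EpoService : ServiceBase // hypothetical service class
        {
            protected override void OnStart(string[] args)
            {
                // Return from OnStart immediately; fetch the XML in the background.
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    var request = (HttpWebRequest)WebRequest.Create("http://example.com/weekly.xml"); // placeholder URL
                    request.Timeout = 120000;          // milliseconds; the default is 100000
                    request.ReadWriteTimeout = 120000; // also covers reading the response stream
                    using (var response = (HttpWebResponse)request.GetResponse())
                    using (var reader = new StreamReader(response.GetResponseStream()))
                    {
                        string xml = reader.ReadToEnd();
                        // hand the XML off to the rest of the service here
                    }
                });
            }
        }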

    Read the article

  • How to read an XML file with Java?

    - by Yatendra Goel
    I don't need to read complex XML files. I just want to read the following configuration file with the simplest possible XML reader:

        <config>
            <db-host>localhost</db-host>
            <db-port>3306</db-port>
            <db-username>root</db-username>
            <db-password>root</db-password>
            <db-name>cash</db-name>
        </config>

    How do I read the above XML file with an XML reader in Java?

    Read the article

  • [Java] Send cookie with HTTP request problem

    - by nkr1pt
    I'm trying to get a certain cookie in a Java client by creating a series of HTTP requests. It looks like I'm getting a valid cookie from the server, but when I send a request to the final URL with the seemingly valid cookie, I should get some lines of XML in the response; instead the response is blank, because the cookie is wrong, or has been invalidated because a session closed, or some other problem I can't figure out. The cookie handed out by the server expires at the end of the session. The cookie seems valid to me: when I make the same calls in Firefox, a similar cookie (same name, same first three letters, same length, also expiring at the end of the session) is stored in Firefox. If I then make a request to the final URL with only this particular cookie stored in Firefox (all other cookies removed), the XML is rendered nicely on the page. Any ideas about what I am doing wrong in this piece of code? One other thing: when I use the value from the very similar cookie generated and stored by Firefox in this piece of code, the last request does give XML feedback in the HTTP response!

        // Validate
        url = new URL(URL_VALIDATE);
        conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Cookie", cookie);
        conn.connect();
        String headerName = null;
        for (int i = 1; (headerName = conn.getHeaderFieldKey(i)) != null; i++) {
            if (headerName.equals("Set-Cookie")) {
                if (conn.getHeaderField(i).startsWith("JSESSIONID")) {
                    cookie = conn.getHeaderField(i).substring(0, conn.getHeaderField(i).indexOf(";")).trim();
                }
            }
        }

        // Get the XML
        url = new URL(URL_XML_TOTALS);
        conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Cookie", cookie);
        conn.connect();

        // Get the response
        StringBuffer answer = new StringBuffer();
        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            answer.append(line);
        }
        reader.close();

        // Output the response
        System.out.println(answer.toString());

    Read the article

  • iPhone's NSXMLParser parsing RSS causes encoding problems

    - by Tankista
    Hi, I'm working on a simple RSS reader. The reader loads data from the internet via this code:

        NSXMLParser *rss = [[NSXMLParser alloc] initWithURL:[NSURL URLWithString:@"http://twitter.com/statuses/user_timeline/50405236.rss"]];

    My problem is with encoding. The RSS 2.0 file is supposed to be UTF-8 encoded, according to the encoding attribute in the XML file:

        <?xml version="1.0" encoding="utf-8"?>

    So when I download the URL's content, the text gets truncated after the first occurrence of a character with diacritics, for example: ľ š č ť ž ý á í é, etc. I tried to solve the problem by downloading the URL as a UTF-8 string, using this code:

        NSString *rssXmlString = [NSString stringWithContentsOfURL:[NSURL URLWithString:@"http://www.macblog.sk/rss.xml"] encoding:NSUTF8StringEncoding error:nil];
        NSData *rssXmlData = [rssXmlString dataUsingEncoding:NSUTF8StringEncoding];

    That did not help. Thanks for your responses.

    Read the article

  • Unable to browse some pdfs and docs.

    - by JamesEggers
    I have a web site that uses Microsoft Indexing Service to index and query a directory that holds various documents of type PDF, RTF, MHT, and DOC. The indexing and querying works well (for the most part); however, some files will load while others will not. This is a Windows Server 2003 box running the site on IIS 6. The indexed directory is a subdirectory of the site's root directory (i.e. http://my.domain.com/files/). The file paths are accurate in the URL; however, I can only access some of the files of each file type. The files that I cannot access give a 404 File Not Found. I am able to open all files via Windows Explorer; however, attempting to open them via a browser over HTTP is hit and miss. Has anyone experienced this issue and knows how to resolve it? Any idea why I can access some files but not others? Does anyone have any recommendations on what to look into (i.e. does the file's owner matter, or something like that)? EDIT: Here are the request and response headers for a bad file:

        GET /files/file1.pdf HTTP/1.1
        Accept: image/gif, image/jpeg, image/pjpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, application/x-silverlight, application/vnd.ms-excel, application/vnd.ms-powerpoint, application/msword, */*
        Accept-Language: en-us
        User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.590; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)
        Accept-Encoding: gzip, deflate
        Proxy-Connection: Keep-Alive
        Host: my.domain.com

        HTTP/1.1 404 Not Found
        Content-Length: 1635
        Content-Type: text/html
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Mon, 01 Jun 2009 15:38:54 GMT
        [typical 404 page markup excluded]

    And here are the headers for a good file (the request headers are identical apart from the path):

        GET /files/file2.pdf HTTP/1.1

        HTTP/1.1 200 OK
        Content-Length: 352464
        Content-Type: application/pdf
        Last-Modified: Tue, 13 Jan 2009 15:27:35 GMT
        Accept-Ranges: bytes
        ETag: "74ccc5759375c91:2a47"
        Server: Microsoft-IIS/6.0
        X-Powered-By: ASP.NET
        Date: Mon, 01 Jun 2009 15:50:33 GMT

    Read the article

  • Open Microsoft Publisher Document on Linux

    - by Peter
    I'm pretty sure the options consist of:

    1. Just don't do it (use a nice open standard file format). Not great when someone sends you something.
    2. Translate the format on Windows. I think you need Publisher; the viewer won't even print. But you can download a trial version for a one-off (been there, done that).
    3. Submit the file for online translation to PDF: www.pdfonline.com/convert-pdf/
    4. Use a Windows VM, Wine, CrossOver Office, Win4Lin, or otherwise run Publisher "under" Linux.

    What I really want to do is convert it to something nicer natively under Linux.

    Read the article

  • Create Word files from Excel content

    - by Lennart
    I have an Excel file that I want to split into several files (Word; PDF is also good), based on content. The content is somewhat like this:

        Person  Fase  Date        Item  Text
        A       1     01-01-2012  Z     Lorem ipsum
        A       2     01-02-2012  X     Lorem ipsum
        B       1     02-01-2012  Y     Lorem ipsum
        C       2     01-01-2012  Z     Lorem ipsum

    I want Word/PDF documents with names like Person_Fase.docx, with the date, item, and text as content, ideally in a table layout. Any hints/clues on how to get there? It's about 700 clients, with up to 300 Excel entries each.

    Read the article

  • Adding a remote SSH printer as a local printer

    - by guest
    I have SSH access to a remote host (FreeBSD) that has a printer set up. I do not have root access on that host or any other special user rights. Now I want to print directly from my laptop (Ubuntu 10.10) on that printer. The problem is that I don't know how to "import" the printer, as it needs authentication from my user account (print quota limitations). E-mailing myself the files I want to print, or scp-ing them every time, is a pain. At the moment I pipe the PostScript output manually to an ssh command, but that's also a huge overhead. E.g. when I want to print a foo.pdf:

        pdftops '/path/to/foo.pdf' - | ssh user@remotehost 'lpr -P printername'

    So, does anyone know of a smooth way to shorten this procedure? Ideally I would just use a printer name instead of the whole ssh command.

    Read the article

  • Is there any way to search within OneNote 2007 attachments

    - by jtolle
    I'm starting to use OneNote (2007) more. One thing I'd like to do is take notes on papers I have read. That is, I attach, say, a PDF file, and then type in some notes about it. Sometimes I do other stuff like copy some key text or figures from the paper, so OneNote is great for this because all that, plus my own notes, plus the file itself can all be in one place. However, OneNote search doesn't seem to be able to search within said PDF files. Windows search finds things, but only in the OneNote cache, not the actual OneNote .one files. (Presumably that will only work for recently accessed stuff, and in any case it doesn't take me to my actual notes.) Is there a way to do what I want? If not, does anyone have a suggestion (or link) as to how to best use OneNote to store (and later search for!) this kind of content and notes?

    Read the article

  • Write a file in UTF-8 using FileWriter (Java)?

    - by user1280970
    I have the following code; however, I want it to write the file as UTF-8 to handle foreign characters. Is there a way of doing this? Is there some parameter it needs? I would really appreciate your help with this. Thanks.

        try {
            // writer, line, surname, and forename are declared elsewhere in the class.
            BufferedReader reader = new BufferedReader(new FileReader("C:/Users/Jess/My Documents/actresses.list"));
            writer = new BufferedWriter(new FileWriter("C:/Users/Jess/My Documents/actressesFormatted.csv"));
            while ((line = reader.readLine()) != null) {
                if (line.length() == 0)
                    continue;
                // If the line starts with a tab then we just want to add a movie
                // using the current actor's name.
                else if (line.charAt(0) == '\t') {
                    readMovieLine2(0, line, surname.toString(), forename.toString());
                } else {
                    // Else we've reached a new actor.
                    readActorName(line);
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }

    Read the article

  • MonoTouch/C# version of "stringWithContentsOfUrl"

    - by Pselus
    I'm trying to convert a piece of Objective-C code into C# for use with MonoTouch, and I have no idea what to use to replace stringWithContentsOfUrl. Should I use something like:

        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://www.helephant.com");
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        if (response.StatusCode == HttpStatusCode.OK && response.ContentLength > 0) {
            TextReader reader = new StreamReader(response.GetResponseStream());
            string text = reader.ReadToEnd();
            Console.Write(text);
        }

    Is that even safe to use in MonoTouch? Will it work on the iPhone?
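
    The closest one-call analogue in .NET is probably WebClient.DownloadString. A minimal sketch, assuming WebClient is available in your MonoTouch profile; the URL is just an example:

        using System;
        using System.Net;
        using System.Text;

        class Fetch
        {
            static void Main()
            {
                using (var client = new WebClient())
                {
                    // Rough equivalent of stringWithContentsOfURL:encoding:error:
                    client.Encoding = Encoding.UTF8;
                    string text = client.DownloadString("http://www.helephant.com");
                    Console.Write(text);
                }
            }
        }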

    Read the article

  • .NET and XML: How can I read nested namespaces (<abc:xyz:name attr="value"/>)?

    - by Entrase
    Suppose there is an element in XML data:

        <abc:xyz:name attr="value"/>

    I'm trying to read it with XmlReader. The problem is that I get an XmlException that says:

        The ':' character, hexadecimal value 0x3A, cannot be included in a name.

    I have already declared the "abc" namespace. I have also tried adding "abc:xyz" and "xyz" namespaces, but this doesn't help at all. I could replace some text before parsing, but there may be a more elegant solution. So what should I do? Here is my code:

        XmlReaderSettings settings = new XmlReaderSettings();
        NameTable nt = new NameTable();
        XmlNamespaceManager nsmgr = new XmlNamespaceManager(nt);
        nsmgr.AddNamespace("abc", "");
        nsmgr.AddNamespace("xyz", "");
        XmlParserContext context = new XmlParserContext(null, nsmgr, null, XmlSpace.None);
        // So this reader can't read <abc:xyz:name attr="value"/>
        XmlReader reader = XmlReader.Create(path, settings, context);
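
    Worth noting: a name with two colons is not legal once namespace processing is in play, so a conformant XmlReader will never accept <abc:xyz:name> no matter which namespaces are declared; rewriting the text before parsing is the usual workaround. A blunt sketch of that preprocessing, where the regex and the urn:abc namespace URI are illustrative assumptions, not a general fix:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;
        using System.Xml;

        class Preprocess
        {
            static void Main()
            {
                string raw = "<abc:xyz:name attr=\"value\"/>";
                // Rewrite the second colon in names like <abc:xyz:name ...> to an
                // underscore, so what remains is an ordinary prefixed name.
                string fixedXml = Regex.Replace(raw, @"<(/?)(\w+):(\w+):(\w+)", "<$1$2:$3_$4");

                var settings = new XmlReaderSettings();
                var nsmgr = new XmlNamespaceManager(new NameTable());
                nsmgr.AddNamespace("abc", "urn:abc"); // namespace URI is a placeholder
                var context = new XmlParserContext(null, nsmgr, null, XmlSpace.None);

                using (XmlReader reader = XmlReader.Create(new StringReader(fixedXml), settings, context))
                {
                    while (reader.Read())
                        if (reader.NodeType == XmlNodeType.Element)
                            Console.WriteLine("{0} attr={1}", reader.Name, reader.GetAttribute("attr"));
                }
            }
        }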

    Read the article

  • Okular (on Ubuntu 9.10) prints multiple pages per sheet (n-up) very small

    - by dgleich
    I'm trying to print a set of beamer slides with multiple slides per page (4-up or 6-up). When I select 4 or 6 pages per sheet in the Okular print dialog, the pages print quite small (tiny, even: about 1.75" by 1.25") and leave significant white space on the page. I can get around this behavior by using the pdfnup utility (in the pdfjam package), which correctly generates a 4- or 6-up PDF file, but it's annoying to generate a second PDF file when I should be able to accomplish this from the print dialog. Details: Ubuntu 9.10 (Karmic), 64-bit, color PostScript printer.

    Read the article

  • How can I copy the link in Google without opening the link and the "Google stuff" in the URL? [closed]

    - by John Isaiah Carmona
    I want to copy a link from Google's search results without opening that link and without the "Google stuff". When I right-click the link in my browser and select Copy Link Location, it copies a very long link because of the Google redirect:

        http://www.google.com.ph/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CBwQFjAA&url=http%3A%2F%2Fdownload.microsoft.com%2Fdownload%2FC%2F0%2FA%2FC0AEF0CC-B969-406D-989A-4CDAFDBB3F3C%2FWin8_UXG_RTM.pdf&ei=1bWHULCyEZGQiQfl_IGIDA&usg=AFQjCNEtK1uai68ZKixTovFm2bwe7C9LGg&sig2=cPFFl4ARTTr7xHTHcr5k8A

    I just want the download.microsoft.com/.../C/0/A/.../Win8_UXG_RTM.pdf URL, but I can't see it in my browser even after opening the site with Google.

    Read the article

  • Approaching this case with ABCPDF

    - by Younes
    We have ABCPDF 8 available to work with for this case. We need to rebuild an existing PDF, with markup and text in it, using text that comes from a CMS. What we basically want to do is take an existing PDF and replace blocks of text and images with the ones our content editors specify in Sitecore. I have been looking at the documentation of ABCPDF, but it's kind of overwhelming at this point, because it's the first time I'm trying to build a PDF dynamically. I found that it's possible to read text from an existing PDF document using the .GetText("") method. This method accepts 4 parameters, and I've tried the SVG one (which returns XML). When I load the XML into an XmlDocument, I find that a lot of text blocks which I assumed to be single blocks of text are split up into different parts. For example:

        <text xml:space="preserve" x="215.4312" y="48.9478" font-size="9" font-family="Arial-BoldMT" fill="rgb(237, 106, 0)" textLength="94.032" transform="translate(215.4312, 48.9478) translate(-215.4312, -48.9478)">wijkverpleegkundige?</text>
        <text xml:space="preserve" x="215.4312" y="61.9438" font-size="9" font-family="ArialMT" textLength="5.652" transform="translate(215.4312, 61.9438) translate(-215.4312, -61.9438)">&#8226;&#9;</text>
        <text xml:space="preserve" x="223.9362" y="61.9438" font-size="9" font-family="ArialMT" textLength="49.509" transform="translate(223.9362, 61.9438) translate(-223.9362, -61.9438)">Lichamelijke</text>
        <text xml:space="preserve" x="273.4452" y="61.9438" font-size="9" font-family="ArialMT" textLength="2.502" transform="translate(273.4452, 61.9438) translate(-273.4452, -61.9438)">&#9;</text>
        <text xml:space="preserve" x="275.9472" y="61.9438" font-size="9" font-family="ArialMT" textLength="32.013" transform="translate(275.9472, 61.9438) translate(-275.9472, -61.9438)">controle</text>
        <text xml:space="preserve" x="307.9602" y="61.9438" font-size="9" font-family="ArialMT" textLength="2.502" transform="translate(307.9602, 61.9438) translate(-307.9602, -61.9438)">&#9;</text>
        <text xml:space="preserve" x="310.4622" y="61.9438" font-size="9" font-family="ArialMT" textLength="10.008" transform="translate(310.4622, 61.9438) translate(-310.4622, -61.9438)">op</text>
        <text xml:space="preserve" x="320.4702" y="61.9438" font-size="9" font-family="ArialMT" textLength="2.502" transform="translate(320.4702, 61.9438) translate(-320.4702, -61.9438)">&#9;</text>
        <text xml:space="preserve" x="322.9722" y="61.9438" font-size="9" font-family="ArialMT" textLength="42.021" transform="translate(322.9722, 61.9438) translate(-322.9722, -61.9438)">bloeddruk,</text>
        <text xml:space="preserve" x="364.9932" y="61.9438" font-size="9" font-family="ArialMT" textLength="2.502" transform="translate(364.9932, 61.9438) translate(-364.9932, -61.9438)">&#9;</text>
        <text xml:space="preserve" x="223.9362" y="74.9398" font-size="9" font-family="ArialMT" transform="translate(223.9362, 74.9398) translate(-223.9362, -74.9398)"

    My first idea was to get all blocks of text and just replace them with my own text from the CMS, but this doesn't seem to be the way to go. I'm now completely lost and don't know how to approach this. Is there any way to make this XML accessible as objects in ABCPDF, or am I going about it wrong? What would be the best approach to make this happen?
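
    One way to make that output workable: load the GetText XML into an XmlDocument and merge runs that share a baseline (the same y attribute) back into logical lines before deciding what to replace. A rough sketch using only System.Xml, assuming the SVG-style output above has been saved to a file whose name here is a placeholder; it reassembles lines but does not write anything back into the PDF:

        using System;
        using System.Collections.Generic;
        using System.Text;
        using System.Xml;

        class MergeRuns
        {
            static void Main()
            {
                var doc = new XmlDocument();
                doc.Load("gettext-output.xml"); // the XML returned by GetText("SVG"), saved to disk

                // Group <text> runs by their y attribute: runs on the same
                // baseline belong to the same visual line of the PDF.
                var lines = new Dictionary<string, StringBuilder>();
                foreach (XmlNode node in doc.GetElementsByTagName("text"))
                {
                    string y = node.Attributes["y"].Value;
                    if (!lines.ContainsKey(y))
                        lines[y] = new StringBuilder();
                    lines[y].Append(node.InnerText);
                }

                foreach (var pair in lines)
                    Console.WriteLine("y={0}: {1}", pair.Key, pair.Value);
            }
        }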

    Read the article

  • Getting an error when running compiled HttpWebRequest code

    - by Afnan
    I have written a program to search for a value on Google. Everything works fine, except that the first time the page is loaded I encounter an error. Afterwards, if I click any link, it works fine with no further errors. The code is as follows:

        private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
        {
            string raw = "http://www.google.com/search?hl=en&q={0}&aq=f&oq=&aqi=n1g10";
            string search = string.Format(raw, HttpUtility.UrlEncode(searchTerm));
            //string search = "http://www.whatismyip.com/";
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(search);
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                using (StreamReader reader = new StreamReader(response.GetResponseStream(), Encoding.ASCII))
                {
                    browserA = reader.ReadToEnd();
                    this.Invoke(new EventHandler(IE1));
                }
            }
        }
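
    The question doesn't show the exact error, but a frequent cause of a first-request failure against google.com is a missing User-Agent header, since HttpWebRequest sends none by default and Google rejects such requests. A hedged sketch of the same download with a User-Agent set and the WebException caught, so the real HTTP status is visible; it reuses search and browserA from the code above, and the agent string is just an example:

        var request = (HttpWebRequest)WebRequest.Create(search);
        request.UserAgent = "Mozilla/5.0 (compatible; MyApp/1.0)"; // identify the client
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
            {
                browserA = reader.ReadToEnd();
                this.Invoke(new EventHandler(IE1));
            }
        }
        catch (WebException ex)
        {
            // Surface the real status code instead of letting the first failure bubble up.
            var failed = ex.Response as HttpWebResponse;
            Console.WriteLine(failed != null ? failed.StatusCode.ToString() : ex.Status.ToString());
        }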

    Read the article

  • Value cannot be null.Parameter name: key when databind in ASP.NET

    - by Yongwei Xing
    Hi all, I am trying to bind data from SQL Server to a list box, but I get the error "Value cannot be null. Parameter name: key".

        sqlCommand = "SELECT [Country] FROM [tbl_LookupCountry] where [Country] IS NOT NULL";
        SqlConnection sqlConCountry = new SqlConnection(connectString);
        SqlCommand sqlCommCountry = new SqlCommand();
        sqlCommCountry.Connection = sqlConCountry;
        sqlCommCountry.CommandType = System.Data.CommandType.Text;
        sqlCommCountry.CommandText = sqlCommand;
        sqlCommCountry.CommandTimeout = 300;
        sqlConCountry.Open();
        reader = sqlCommCountry.ExecuteReader();
        ddlCountry.DataSource = reader;
        ddlCountry.DataBind();
        sqlConCountry.Close();

    Has anyone met this problem before?
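
    The question doesn't include the full stack trace, but a list control bound to a data reader without being told which column to display can fail during binding. A guess at the fix, sketched below: set the display and value fields before calling DataBind.

        ddlCountry.DataTextField = "Country";  // column shown to the user
        ddlCountry.DataValueField = "Country"; // column used as each item's value
        ddlCountry.DataSource = reader;
        ddlCountry.DataBind();
        sqlConCountry.Close();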

    Read the article

  • Decoding algorithm wanted

    - by Horace Ho
    I receive encoded PDF files regularly. The encoding works like this: the PDFs can be displayed correctly in Acrobat Reader, but if you select all, copy the text via Acrobat Reader, and paste it into a text editor, you can see that the content is encoded. So, examples are:

        13579 -> 3579;
        hello -> jgnnq

    It's basically an offset (maybe a swap) of ASCII characters. The question is how I can find the offset automatically when I have access to only a few samples. I cannot be sure whether the encoding offset changes. All I know is that some text will usually (if not always) show up inside the PDF, e.g. "Name:", "Summary:", "Total:". Thank you!
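
    Since markers like "Name:" almost always appear, a known-plaintext search over candidate offsets will recover the shift. A small self-contained sketch, assuming the encoding really is a uniform per-character ASCII shift, which matches both samples above (in "13579 -> 3579;" each character moves up by 2, with '9' becoming ';'):

        using System;
        using System.Linq;

        class FindShift
        {
            static readonly string[] Markers = { "Name:", "Summary:", "Total:" };

            // Shift every character back by 'offset'.
            static string Decode(string s, int offset) =>
                new string(s.Select(c => (char)(c - offset)).ToArray());

            static int? FindOffset(string encoded)
            {
                // Try every plausible shift; keep the one that reveals a known marker.
                for (int offset = -32; offset <= 32; offset++)
                    if (Markers.Any(m => Decode(encoded, offset).Contains(m)))
                        return offset;
                return null; // no marker found at any shift
            }

            static void Main()
            {
                // "Pcog<" is "Name:" shifted up by 2, like the samples in the question.
                Console.WriteLine(FindOffset("Pcog<")); // prints 2
            }
        }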

    Read the article

  • GetResponse not working after authentication

    - by Hazler
    For starters, here's my code:

        // Create a request using a URL that can receive a post.
        WebRequest request = WebRequest.Create("http://mydomain.com/cms/csharptest.php");
        request.Credentials = new NetworkCredential("myUser", "myPass");
        // Set the Method property of the request to POST.
        request.Method = "POST";
        // Create POST data and convert it to a byte array.
        string postData = "name=PersonName&age=25";
        byte[] byteArray = Encoding.UTF8.GetBytes(postData);
        // Set the ContentType property of the WebRequest.
        request.ContentType = "application/x-www-form-urlencoded";
        // Set the ContentLength property of the WebRequest.
        request.ContentLength = byteArray.Length;
        // Get the request stream.
        Stream dataStream = request.GetRequestStream();
        // Write the data to the request stream.
        dataStream.Write(byteArray, 0, byteArray.Length);
        // Close the Stream object.
        dataStream.Close();
        // Get the response.
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        // Display the status.
        Console.WriteLine((response).StatusDescription);
        // Get the stream containing content returned by the server.
        dataStream = response.GetResponseStream();
        // Open the stream using a StreamReader for easy access.
        StreamReader reader = new StreamReader(dataStream);
        // Read the content.
        string responseFromServer = reader.ReadToEnd();
        // Display the content.
        Console.WriteLine(responseFromServer);
        // Clean up the streams.
        reader.Close();
        dataStream.Close();
        response.Close();

    The cms/ directory requires authentication, but if I run this same code somewhere authentication isn't needed, it works fine. The error (System.Net.WebException: The remote server returned an error: (403) Forbidden) occurs at:

        HttpWebResponse response = (HttpWebResponse)request.GetResponse();

    I have managed to read data after authenticating, but not when I also send POST data. What's wrong with this?
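
    One thing that bites here: .NET only sends NetworkCredential after the server answers with a 401 challenge, and some servers respond 403 outright to a POST that arrives without credentials. Sending the credentials with the first request avoids that round-trip. A sketch, assuming the server uses Basic auth (adjust if it doesn't):

        WebRequest request = WebRequest.Create("http://mydomain.com/cms/csharptest.php");
        // Attach Basic credentials up front instead of waiting for a 401 challenge.
        string token = Convert.ToBase64String(Encoding.ASCII.GetBytes("myUser:myPass"));
        request.Headers["Authorization"] = "Basic " + token;
        request.Method = "POST";
        byte[] byteArray = Encoding.UTF8.GetBytes("name=PersonName&age=25");
        request.ContentType = "application/x-www-form-urlencoded";
        request.ContentLength = byteArray.Length;
        using (Stream dataStream = request.GetRequestStream())
            dataStream.Write(byteArray, 0, byteArray.Length);
        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine(response.StatusDescription);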

    Read the article

  • Hyperlink to doc file slow opening

    - by mserioli
    I have two Excel files containing links to .doc and .pdf files. Both the Excel files and the linked files are in a network shared folder. The first Excel file is an .xls, the second an .xlsm. While links to .pdf files open very fast (the file opens in a few seconds), it takes a long time to open the .doc files (about 40 seconds). I have searched on the internet but found no solution so far. I have this problem with both Excel 2007 and 2010. Does anyone know how to solve it? Thanks a lot, Marco

    Read the article

  • Nginx, proxy passing to Apache, and SSL

    - by Vic
    I have Nginx and Apache set up, with Nginx proxy-passing everything to Apache except static resources. I have a server set up for port 80 like so:

        server {
            listen 80;
            server_name *.example1.com *.example2.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass http://127.0.0.1:8080;
                include /etc/nginx/conf.d/proxy.conf;
            }
        }

    And since we have multiple SSL sites (with different SSL certificates) I have a server{} block for each of them, like so:

        server {
            listen 443 ssl;
            server_name *.example1.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass https://127.0.0.1:8443;
                include /etc/nginx/conf.d/proxy.conf;
                proxy_set_header X-Forwarded-Port 443;
                proxy_set_header X-Forwarded-Proto https;
            }
        }

        server {
            listen 443 ssl;
            server_name *.example2.com;
            [...]
            location ~* \.(?:ico|css|js|gif|jpe?g|png|pdf|te?xt)$ {
                access_log off;
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
                add_header Vary: Accept-Encoding;
            }
            location / {
                proxy_pass https://127.0.0.1:8445;
                include /etc/nginx/conf.d/proxy.conf;
                proxy_set_header X-Forwarded-Port 443;
                proxy_set_header X-Forwarded-Proto https;
            }
        }

    First of all, I think there is a very obvious problem here, which is that I'm double-encrypting everything: first at the Nginx level and then again by Apache. To make everything worse, I just started using Amazon's Elastic Load Balancer, so I added the certificate to the ELB and now SSL encryption is happening three times. That's gotta be horrible for performance. What is the sane way to handle this? Should I be forwarding https on the ELB -> http on Nginx -> http on Apache? Secondly, there is so much duplication above. Is the best method to not repeat myself to put all of the static asset handling in an include file and just include it in each server block?

    Read the article

  • How to open many files simultaneously for reading in C

    - by monkeyking
    I'm trying to port some of my C++ code to C. I have the following construct:

        class reader {
        private:
            FILE *fp;
            alot_of_data data; // updated by the read_until() method
        public:
            reader(const char *filename);
            read_until(/* some conditional dependent on the contents of the file, and the arg supplied */);
        };

    I'm then instantiating hundreds of these objects and iterating over them, calling read_until() several times on each one until all files are at EOF. I'm failing to see any clever way to do this in C. The only solution I can come up with is making an array of FILE pointers and doing the same with all the private member data from my class, but this seems very messy. Can I implement the functionality of my class with a function pointer, or anything better? I think I'm missing a fundamental design pattern. The files are way too big to have them all in memory, so reading everything from every file is not feasible. Thanks

    Read the article

  • .NET XmlSerializer fails with List<T>

    - by Redshirt
    I'm using a singleton class to save all my settings info. It's first utilized by calling Settings.ValidateSettings(@"C:\MyApp"). The problem I'm having is that the List<ContactInfo> Contacts property is causing the XmlSerializer to fail to write the settings file, and to fail to load the settings. If I comment out the List<T> then I have no problems saving/loading the XML file. What am I doing wrong?

        // The actual settings to save
        public class MyAppSettings
        {
            public bool FirstLoad { get; set; }
            public string VehicleFolderName { get; set; }
            public string ContactFolderName { get; set; }

            public List<ContactInfo> Contacts
            {
                get
                {
                    if (contacts == null)
                        contacts = new List<ContactInfo>();
                    return contacts;
                }
                set { contacts = value; }
            }
            private List<ContactInfo> contacts;
        }

        // The class in which the settings are manipulated
        public static class Settings
        {
            public static string SettingPath;
            private static MyAppSettings instance;

            public static MyAppSettings Instance
            {
                get
                {
                    if (instance == null)
                        instance = new MyAppSettings();
                    return instance;
                }
                set { instance = value; }
            }

            public static void InitializeSettings(string path)
            {
                SettingPath = Path.GetFullPath(path + "\\MyApp.xml");
                if (File.Exists(SettingPath))
                {
                    LoadSettings();
                }
                else
                {
                    Instance.FirstLoad = true;
                    Instance.VehicleFolderName = "Cars";
                    Instance.ContactFolderName = "Contacts";
                    SaveSettingsFile();
                }
            }

            // Load the settings from the xml file
            private static void LoadSettings()
            {
                XmlSerializer ser = new XmlSerializer(typeof(MyAppSettings));
                TextReader reader = new StreamReader(SettingPath);
                Instance = (MyAppSettings)ser.Deserialize(reader);
                reader.Close();
            }

            // Save the settings to the xml file
            public static void SaveSettingsFile()
            {
                XmlSerializer ser = new XmlSerializer(typeof(MyAppSettings));
                TextWriter writer = new StreamWriter(SettingPath);
                ser.Serialize(writer, Settings.Instance);
                writer.Close();
            }

            public static bool ValidateSettings(string initialFolder)
            {
                try
                {
                    Settings.InitializeSettings(initialFolder);
                }
                catch (Exception e)
                {
                    return false;
                }
                // Do some validation logic here
                return true;
            }
        }

        // A utility class to contain each contact detail
        public class ContactInfo
        {
            public string ContactID;
            public string Name;
            public string PhoneNumber;
            public string Details;
            public bool Active;
            public int SortOrder;
        }
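
    XmlSerializer usually wraps the real problem (often "There was an error reflecting type ...") in an InnerException, and the catch in ValidateSettings above swallows it. Before guessing at the cause, a small diagnostic sketch of that method which surfaces the whole exception chain:

        public static bool ValidateSettings(string initialFolder)
        {
            try
            {
                Settings.InitializeSettings(initialFolder);
            }
            catch (Exception e)
            {
                // Walk the InnerException chain: the serializer's top-level
                // message is generic; the root cause sits at the bottom.
                for (Exception inner = e; inner != null; inner = inner.InnerException)
                    Console.WriteLine(inner.Message);
                return false;
            }
            return true;
        }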

    Read the article

  • Setting up an Apache rewrite rule to only forward if in a directory

    - by wooowoopo
    Hi, I currently have a site set up with the following in httpd.conf:

        <VirtualHost x.x.x.x:80>
            ServerName testsite
            ExpiresActive On
            ExpiresByType image/gif A2592000
            ExpiresByType image/png A2592000
            ExpiresByType image/jpg A2592000
            ExpiresByType image/jpeg A2592000
            ExpiresByType text/css A2592000
            ExpiresByType application/x-javascript A1
            ExpiresByType text/javascript A1
            AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/x-javascript text/javascript
            DocumentRoot /usr/local/www/apache22/data/thesite/trunk
            RewriteEngine On
            RewriteRule !\.(htc|js|tiff|gif|css|jpg|png|swf|ico|jar|html|doc|pdf|htm|xml)$ %{DOCUMENT_ROOT}/../platform.php [L]
        </VirtualHost>

    where x.x.x.x is my IP. At the moment it forwards anything not matching the set (htc|js|tiff|gif|css|jpg|png|swf|ico|jar|html|doc|pdf|htm|xml) to platform.php, so http://x.x.x.x/phpmyadmin also gets forwarded. Would it be possible to only perform this rewrite when the request is in a subdirectory, e.g. http://x.x.x.x/projectone, so that http://x.x.x.x/projectone/login would be directed to platform.php? Thanks

    Read the article

  • How do you import an EPS file in Inkscape?

    - by Neil
    I'm using Inkscape, and I'm trying to import an EPS file to use as a vector and eventually save it as an SVG. This link mentions several methods: http://www.inkscapeforum.com/viewtopic.php?f=5&t=797 But the responses aren't rated, since it's a forum, so I thought I'd ask here to find the best answer. I'd prefer not to have to use some website to convert the file to a PDF first. Either way, whether I import the EPS into Inkscape directly or use the website to convert it to a PDF, the resulting file loses all colour and gradients, and the EPS gets cut off on the right side. It looks like ps2pdf is clipping the file incorrectly, and Inkscape is eliminating the colour. I have these versions installed on Ubuntu Lucid Linux:

        Inkscape 0.47.0-2ubuntu2
        Ghostscript 8.71.dfsg.1-0ubuntu5.3

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >