Search Results

Search found 12765 results on 511 pages for 'format()'.


  • ASP.NET: how to get the cache size in KB used by this application?

    - by eugeneK
    I need to know the Cache size in KB. I've read a solution on this site for a closely related problem, but it doesn't solve mine. As far as I know I can get values from PerfMon; here is the function:

        public static string getCacheSize()
        {
            PerformanceCounter pc = new PerformanceCounter("ASP.NET Applications",
                "Cache % Machine Memory Limit Used", "__TOTAL__", true);
            return string.Format("{0:0.00}%", pc.NextValue());
        }

    Two problems: 1. it gives me a percentage when I need KB, and there is no counter closer to what I want in PerfMon; 2. it shows 70.5% used while total memory usage is about 50%. Any help?

    Read the article

  • Scrapy Not Returning Additional Info from Scraped Link in Item via Request Callback

    - by zoonosis
    Basically the code below scrapes the first 5 items of a table. One of the fields is another href, and clicking on that href provides more info which I want to collect and add to the original item. So parse is supposed to pass the semi-populated item to parse_next_page, which then scrapes the next bit and should return the completed item back to parse. Running the code below only returns the info collected in parse. If I change the return items to return request, I get a completed item with all 3 "things", but I only get 1 of the rows, not all 5. I'm sure it's something simple, I just can't see it.

        from scrapy.spider import BaseSpider
        from scrapy.selector import HtmlXPathSelector
        from scrapy.http import Request
        from myproject.items import ScrapyItem  # assumed: the item class lives in the project's items module

        class ThingSpider(BaseSpider):
            name = "thing"
            allowed_domains = ["somepage.com"]
            start_urls = ["http://www.somepage.com"]

            def parse(self, response):
                hxs = HtmlXPathSelector(response)
                items = []
                for x in range(1, 6):
                    item = ScrapyItem()
                    str_selector = '//tr[@name="row{0}"]'.format(x)
                    item['thing1'] = hxs.select(str_selector + '/a/text()').extract()
                    item['thing2'] = hxs.select(str_selector + '/a/@href').extract()
                    print 'hello'
                    request = Request("www.nextpage.com", callback=self.parse_next_page, meta={'item': item})
                    print 'hello2'
                    request.meta['item'] = item
                    items.append(item)
                return items

            def parse_next_page(self, response):
                print 'stuff'
                hxs = HtmlXPathSelector(response)
                item = response.meta['item']
                item['thing3'] = hxs.select('//div/ul/li[1]/span[2]/text()').extract()
                return item
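    A minimal sketch (not the asker's code) of the request-chaining pattern the question is circling around, written as methods on the same spider class with the same old-style Scrapy API and placeholder URLs: parse yields one Request per row so the scheduler actually fetches it, and the callback finishes and yields the item, which is how all five rows can come back with all three fields.

        # Sketch only: placeholder URLs and field names; old-style Scrapy API assumed.
        def parse(self, response):
            hxs = HtmlXPathSelector(response)
            for x in range(1, 6):
                item = ScrapyItem()
                str_selector = '//tr[@name="row{0}"]'.format(x)
                item['thing1'] = hxs.select(str_selector + '/a/text()').extract()
                item['thing2'] = hxs.select(str_selector + '/a/@href').extract()
                # Yield the request instead of returning a list of half-built items;
                # the item rides along in meta and is completed in the callback.
                yield Request("http://www.nextpage.com", callback=self.parse_next_page,
                              meta={'item': item})

        def parse_next_page(self, response):
            hxs = HtmlXPathSelector(response)
            item = response.meta['item']
            item['thing3'] = hxs.select('//div/ul/li[1]/span[2]/text()').extract()
            yield item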

    Read the article

  • Events are not displayed on my fullcalendar

    - by ChangJiu
    Hi BalusC! I have used your method above in my servlet [CalendarMap]:

        public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
            Map map = new HashMap();
            map.put("id", 115);
            map.put("title", "changjiu");
            map.put("start", new SimpleDateFormat("yyyy-MM-15").format(new Date()));
            map.put("url", "http://yahoo.com/");

            // Convert to JSON string.
            String json = new Gson().toJson(map);

            // Write JSON string.
            response.setContentType("application/json");
            response.setCharacterEncoding("UTF-8");
            response.getWriter().write(json);
        }

    I want to display it in my fullcalendar as follows:

        $(document).ready(function() {
            $('#calendar').fullCalendar({
                eventSources: [ "CalendarMap" ]
            });
        });

    But it doesn't work. Can you help me? Thank you!

    Read the article

  • Can anyone help me out in writing an XSL-FO stylesheet for this XML file?

    - by atrueguy
    The XML lists currencies by country:

        Currencies By Country
        Australia    - Australian Dollar
        Austria      - Schilling
        Belgium      - Belgium Franc
        Canada       - Canadian Dollar
        England      - Pound
        Fiji         - Fijian Dollar
        France       - Franc
        Germany      - DMark
        Hong Kong    - Hong Kong Dollar
        Italy        - Lira
        Japan        - Yen
        Netherlands  - Guilder
        Switzerland  - SFranc
        USA          - Dollar

    I started to write an XSL-FO stylesheet to format the above XML into a table, but I am really struggling with the flow of the tags. Can anyone help me out in writing XSL-FO for this XML file? Could anyone also suggest material for starting with XSL-FO, so that I can code my own stylesheet? The tags and syntax are very difficult to understand.

    Read the article

  • US (Postal) ZIP codes: ZIP+4 vs. ZIP in web applications

    - by FreekOne
    Hi guys, I am currently writing a web application intended for US users that asks them to input their ZIP code, and I just found out about the ZIP+4 code. Since I am not from the US and getting a user's correct ZIP code is important, I have no idea which format I should use. Could anyone (preferably from the US) please clarify what the deal is with the +4 digits and how important they are? Is it safe to use only the plain 5-digit ZIP? Thank you in advance!

    Read the article

  • How do I configure the Python logging module in Django?

    - by mipadi
    I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file:

        import logging
        import logging.handlers
        import os

        date_fmt = '%m/%d/%Y %H:%M:%S'
        log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)', datefmt=date_fmt)
        log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
        log_name = os.path.join(log_dir, "nyrb.log")
        bytes = 1024 * 1024  # 1 MB

        if not os.path.exists(log_dir):
            os.makedirs(log_dir)

        handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=bytes, backupCount=7)
        handler.setFormatter(log_formatter)
        handler.setLevel(logging.DEBUG)

        logging.getLogger().setLevel(logging.DEBUG)
        logging.getLogger().addHandler(handler)

        logging.getLogger(__name__).info("Initialized logging subsystem")

    At startup, I get a couple Django-related messages, as well as the "Initialized logging subsystem" message, in the log files, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache), and use the standard log format (not the formatter I designated). Am I configuring logging incorrectly?
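    Not from the original post: in Django versions that support the LOGGING setting (the stdlib dictConfig format), the same rotating-file handler can be declared in settings.py without imperative setup code. A sketch, with placeholder paths:

        # Sketch only: same formatter and handler as above, expressed declaratively.
        LOGGING = {
            'version': 1,
            'disable_existing_loggers': False,
            'formatters': {
                'default': {
                    'format': '[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)',
                    'datefmt': '%m/%d/%Y %H:%M:%S',
                },
            },
            'handlers': {
                'rotating_file': {
                    'class': 'logging.handlers.RotatingFileHandler',
                    'filename': '/path/to/var/log/my_app/nyrb.log',  # placeholder path
                    'maxBytes': 1024 * 1024,
                    'backupCount': 7,
                    'level': 'DEBUG',
                    'formatter': 'default',
                },
            },
            'root': {
                'handlers': ['rotating_file'],
                'level': 'DEBUG',
            },
        }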

    Read the article

  • RDF Usage Rates for Syndication

    - by David in Dakota
    Is RDF still used widely for content syndication? Specifically, I know only of Slashdot as a large-scale website syndicating content in that format (versus, say, RSS). Understandably this might seem vague to answer, so more specifically:

        - Can anyone list any larger sites, similar in scale to Amazon or CNN, using it?
        - Any web-based publishing platforms (Wordpress, Joomla, etc.) that generate syndication feeds with this XML vocabulary?
        - Any other more quantifiable evidence that it is used for syndication online?

    I understand that RDF may be a parent specification, but in this case I'm talking about sites that syndicate content using <rdf as the root element and heavily leveraging elements from the RDF namespace: http://www.w3.org/1999/02/22-rdf-syntax-ns#

    Read the article

  • Surrogate key for date dimension?

    - by Navin
    There are two schools of thought:

        1. Use a surrogate key, preferably in the YYYYMMDD format, as this will always be sequential.
        2. Eliminate the date dimension surrogate key and use the actual date instead.

    My questions to experts on dimensional modeling are:

        1. Which design would you prefer, and why?
        2. How should we handle unknown values in each case? Can we simply place NULL in the fact table for unknown dates, since a foreign key can be NULL (if not, why)?
        3. If we need to partition the fact table on the date column, how would we achieve that in case 1?

    I am inclined towards using the actual date and using NULL to represent unknown dates in the fact table, as date-related validation on the fact can be done without needing to look into the dimension table.
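    Not from the question, just a small illustration of option 1: a YYYYMMDD surrogate key is the date rendered as an integer, and a reserved sentinel row is often used for unknown dates instead of NULL. A Python sketch:

        from datetime import date

        UNKNOWN_DATE_KEY = -1  # hypothetical sentinel row in the date dimension

        def date_key(d):
            """Render a date as a sequential YYYYMMDD integer, e.g. 2010-05-14 -> 20100514."""
            if d is None:
                return UNKNOWN_DATE_KEY
            return d.year * 10000 + d.month * 100 + d.day

        print(date_key(date(2010, 5, 14)))  # 20100514
        print(date_key(None))               # -1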

    Read the article

  • steps for facebook connect graph api

    - by dskanth
    Hi, I am using Facebook Connect on my site, and I want to know how to use the Graph API for authenticating the user. I followed these steps:

        1) Initially I sent a request for a "code" by clicking on the Facebook icon on my site:
           https://graph.facebook.com/oauth/authorize?client_id=xxx&redirect_uri=http://xxxxxxxx
        2) Then, after getting a code, I sent a request for an "access token" by clicking on another link on my site:
           https://graph.facebook.com/oauth/access_token?client_id=xxx&redirect_uri=http://xxxxxxx&client_secret=xxxx&code=xxxxx
        3) After I got the token, I sent another request to get the user data by clicking on yet another link:
           https://graph.facebook.com/me?access_token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    Finally I got the user data in array format, which I need to parse for my required data like the user's first name, email, etc. Now my question is: how can I automate this process with just one click? Right now I am using 3 different links to send those requests. Can anyone suggest a solution?
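    A minimal sketch (not from the original post) of chaining steps 2 and 3 server-side in Python with the requests library, so the user only clicks once for step 1 and the redirect handler does the rest. It uses the endpoints quoted above; the client ID, secret and redirect URI are placeholders.

        # Sketch only: server-side exchange of the code for a token, then profile fetch.
        import requests

        CLIENT_ID = "xxx"                              # placeholder
        CLIENT_SECRET = "xxxx"                         # placeholder
        REDIRECT_URI = "http://example.com/callback"   # hypothetical

        def fetch_user(code):
            # Step 2: exchange the code received on the redirect URI for an access token.
            token_resp = requests.get(
                "https://graph.facebook.com/oauth/access_token",
                params={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET,
                        "redirect_uri": REDIRECT_URI, "code": code})
            body = token_resp.text  # e.g. "access_token=...&expires=..."
            access_token = body.split("access_token=")[1].split("&")[0]

            # Step 3: fetch the user's profile with the token.
            me = requests.get("https://graph.facebook.com/me",
                              params={"access_token": access_token})
            return me.json()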

    Read the article

  • How would I detect horizontal black lines using ImageMagick?

    - by Zando
    So I have what is essentially a spreadsheet in TIFF format. There is some uniformity to it; for example, all the column widths are the same. I want to delimit this sheet by those known column widths and basically create lots of little graphic files, one for each cell, run OCR on them and store the results in a database. The problem is that the horizontal lines are not all at the same height, so I need to use some kind of graphics library command to check whether every pixel across a row is the same color (i.e. black). If so, then I know I've reached the height delimiter for a cell. How would I go about doing that? (I'm using RMagick.)
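    A sketch of the row-scanning idea in Python with Pillow (the question uses RMagick; this is only to illustrate the approach, with a hypothetical black threshold): a row whose pixels are all near-black is treated as a cell delimiter.

        from PIL import Image

        def find_black_rows(path, threshold=40):
            img = Image.open(path).convert("L")   # grayscale
            width, height = img.size
            pixels = img.load()
            black_rows = []
            for y in range(height):
                # A ruled line is a row where every pixel is at or below the threshold.
                if all(pixels[x, y] <= threshold for x in range(width)):
                    black_rows.append(y)
            return black_rows

        # Consecutive row indices in the result belong to the same ruled line;
        # the gaps between them give the cell heights to crop at.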

    Read the article

  • Tools and ways to generate HTML help for built-in help system (QtHelp)?

    - by BastiBense
    Hello, I'm in the process of implementing a built-in help system based on QtHelp in my application. Since QtHelp is based on Qt's help collection files, I need to produce a set of HTML pages. Since I won't be writing the documentation alone (a few of my colleagues will write, too), I am looking for the best way to produce these files. We are internally using a wiki, and I know that the documentation should be written in some kind of markup language instead of giving all authors a WYSIWYG HTML editor. So my question is: are there tools out there which help with the process of generating documentation that can be exported as a set of HTML files, and possibly as PDFs, too? Thanks in advance! Update: I'm already using Doxygen for C++ documentation generation, but I'm not exactly looking for an API documentation generator; rather something like LaTeX, which allows you to format the documentation contents as a markup document (much like a wiki).

    Read the article

  • Trouble using SFML with GCC and OS X

    - by user1322654
    I've been trying to get SFML working for a while now using GCC. I'm on OS X, by the way. I followed the standard Linux instructions, using the Linux 64-bit download; however, when it comes to compiling with

        g++ -o testing main.cpp -lsfml-system

    this happens:

        main.cpp: In function ‘int main()’:
        main.cpp:7: error: ‘class sf::Clock’ has no member named ‘GetElapsedTime’
        main.cpp:9: error: ‘class sf::Clock’ has no member named ‘GetElapsedTime’
        main.cpp:10: error: ‘Sleep’ is not a member of ‘sf’

    So I thought it could be due to not using includes, so I changed my compile command to:

        g++ -o testing main.cpp -I ~/SFML-1.6/include/ -lsfml-system

    and now I'm getting this error:

        ld: warning: ignoring file /usr/local/lib/libsfml-system.so, file was built for unsupported file format which is not the architecture being linked (x86_64)
        Undefined symbols for architecture x86_64:
          "sf::Clock::Clock()", referenced from:
              _main in ccZEiB7b.o
          "sf::Clock::GetElapsedTime() const", referenced from:
              _main in ccZEiB7b.o
          "sf::Sleep(float)", referenced from:
              _main in ccZEiB7b.o
        ld: symbol(s) not found for architecture x86_64
        collect2: ld returned 1 exit status

    And I have no idea what to do to fix it.

    Read the article

  • feature extraction from acoustic signals

    - by Dolphin
    Hi everyone, it's been a while. I found APIs in Java for extracting features from acoustic audio files and from symbolic files separately. But now I have a problem in mapping from low-level WAV audio features to high-level MIDI features; i.e. I need to write the extracted WAV audio features out in MIDI format, but I cannot think of anything even close to it. Can someone please provide me some insight into how I can approach this? I greatly appreciate your responses. Thanks in advance.

    Read the article

  • Web-based document merge solution?

    - by rugcutter
    We are looking for a web-based document merge solution. Our application is a web-based project management tool built using Xataface (PHP on Windows IIS + MySQL). We have a function that allows the user to generate a status report in Microsoft Word format based on data in the tool. Currently this function is implemented using LiveDocX: we have a status report template, and LiveDocX performs the merge into the template using data from our project management tool. The main drawback is that LiveDocX is web-service based, and we are looking to replace it in order to reduce our dependence on the up-time of a third-party web service that we cannot control. Does anyone have any suggestions for a web-based document merge solution that I can install on my IIS or PHP-based server?

    Read the article

  • Collaborative localization website supporting Android strings.xml?

    - by Nicolas Raoul
    My open source Android application has internationalization done the Android way, with strings.xml files. The community has many people from many countries, and they are willing to contribute and improve translations using a collaborative website. There is Launchpad, but it only supports the gettext format, so we would have to use scripts, which is not very convenient. There is Crowdin, but somehow this website seems dead: nearly no projects, and the download links do not work. Actually we started using Crowdin, but all download links fail to give any strings.xml file back, see here. What website is convenient for translating open source Android applications?

    Read the article

  • How to save bytes to an image and access it from Bottle

    - by Graham Smith
    I'm working on an API wrapper for Snapchat using Python and Bottle, but in order to return the file (retrieved by the Python script) I have to save the bytes (returned by Snapchat) to a .jpg file. I'm not quite sure how I will do this and still be able to access the file so that it can be returned. Here's what I have so far, but it returns a 404.

        @route('/image')
        def image():
            username = request.query.username
            token = request.query.auth_token
            img_id = request.query.id
            return get_blob(username, token, img_id)

        def get_blob(usr, token, img_id):
            # Form URL and download encrypted "blob"
            blob_url = "https://feelinsonice.appspot.com/ph/blob?id={}".format(img_id)
            blob_url += "&username=" + usr + "&timestamp=" + str(timestamp()) + "&req_token=" + req_token(token)
            enc_blob = requests.get(blob_url).content
            # Save decrypted image
            FileUpload.save('/images/' + img_id + '.jpg')
            img = open('images/' + img_id + '.jpg', 'wb')
            img.write(decrypt(enc_blob))
            img.close()
            return static_file(img_id + '.jpg', root='/images/')
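    Not from the original code: a minimal sketch of the save-then-serve step in Bottle, writing the decrypted bytes under one directory and handing the same directory to static_file. The directory name is a placeholder and Python 3 is assumed.

        import os
        from bottle import static_file

        IMAGE_DIR = os.path.join(os.path.dirname(__file__), "images")  # hypothetical location

        def save_and_serve(img_id, decrypted_bytes):
            # Write the decrypted bytes to disk, then serve from the same root.
            os.makedirs(IMAGE_DIR, exist_ok=True)  # Python 3.2+
            path = os.path.join(IMAGE_DIR, img_id + ".jpg")
            with open(path, "wb") as f:
                f.write(decrypted_bytes)
            return static_file(img_id + ".jpg", root=IMAGE_DIR)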

    Read the article

  • How to get the git commit count?

    - by Splo
    I'd like to get the number of commits in my git repository, a bit like SVN revision numbers. The goal is to use it as a unique, incrementing build number. I currently do it like this, on Unix/Cygwin/msysGit:

        git log --pretty=format:'' | wc -l

    But I feel it's a bit of a hack. Is there a better way to do that? It would be cool if I actually didn't need wc, or even git, so it could work on bare Windows. Just read a file or a directory structure...
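    Not from the original post: one commonly used alternative counts commits with git rev-list instead of piping through wc. A small Python wrapper for a build script might look like this (it still requires git on the PATH and a working directory inside the repository):

        # Sketch: count commits for use as a build number.
        import subprocess

        def commit_count(rev="HEAD"):
            out = subprocess.check_output(["git", "rev-list", "--count", rev])
            return int(out.strip())

        if __name__ == "__main__":
            print(commit_count())  # e.g. 1432 (illustrative value)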

    Read the article

  • using core data with web services

    - by Jayshree
    Hi. I am a noob in Xcode. I am developing an iPhone app where I need to send and receive data from a web service, and I need to store the data temporarily in my app. I don't want to use SQLite, so I was wondering if I should use Core Data for this purpose. I read some articles but I still don't have a clear picture of how to do it, because I have used Core Data only with SQLite. I want to do the following things:

        - Receive table data from a web service.
        - Perform certain calculations on those fields.
        - Send the data back in XML format to the server.

    How do I convert the XML data into int, date or any other data type, and how do I store it in managed objects? Can anyone please help me with this? Thanks for your time.

    Read the article

  • Can Eclipse parse and use emacs-style meta information in source code?

    - by ataylor
    In emacs, it is possible to start a file off with a line like this:

        /* -*- mode: java; c-basic-offset: 4; indent-tabs-mode: nil -*- */

    This instructs emacs to use 4 spaces for indentation. I like the idea of storing this coding-style meta-information directly and explicitly in the source code. Are there any options for doing this in other IDEs? Does Eclipse in particular have the ability to configure itself from a line in the emacs format, or something equivalent?

    Read the article

  • Change the output of the Html.menu() ASP.NET MVC control

    - by user327893
    I'm creating an ASP.NET MVC application and I'm using a sitemap to create my menu. Is it possible to change the output of the Html.menu() control?

    Normal output:

        <ul>
          <li><a href="#">Item 1</a></li>
          <li><a href="#">Item 2</a></li>
        </ul>

    Needed output:

        <ul>
          <li><a href="#"><span>Item 1</span></a></li>
          <li><a href="#"><span>Item 2</span></a></li>
        </ul>

    Or should I filter through the nodes of the sitemap to get the format I want? Thanks in advance. :)

    Read the article

  • How to parse a binary file using Javascript and Ajax

    - by Alex Jeffery
    I am trying to use jQuery to pull a binary file from a web server, parse it in JavaScript and display the contents. I can get the file OK and parse some of the file correctly. However, I am running into trouble with one byte not coming out as expected. I am parsing the file a byte at a time; it is correct until I get to the hex value B6, where I am getting FD instead of B6.

        Function to read a byte:  data.charCodeAt(0) & 0xff;

        File as hex:  02 00 00 00 55 4C 04 00 B6 00 00 00

        The format I want to parse the file into:
          short:  0002
          short:  0000
          string: UL
          short:  0004
          long:   0000B6

    Any hints as to why the last value is incorrect?
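    As a worked illustration of the intended layout (not the asker's JavaScript), the quoted bytes unpack cleanly with Python's struct module, assuming little-endian byte order:

        # Sketch: short, short, 2-byte string, short, 4-byte long, little-endian.
        import struct

        data = bytes([0x02, 0x00, 0x00, 0x00, 0x55, 0x4C,
                      0x04, 0x00, 0xB6, 0x00, 0x00, 0x00])

        a, b, tag, c, d = struct.unpack("<hh2shI", data)
        print(a, b, tag, c, d)   # 2 0 b'UL' 4 182  (0xB6 == 182)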

    Read the article

  • Storing year/make/model in a database?

    - by Mark
    Here's what I'm thinking (excuse the Django format):

        class VehicleMake(Model):
            name = CharField(max_length=50)

        class VehicleModel(Model):
            make = ForeignKey(VehicleMake)
            name = CharField(max_length=50)

        class VehicleYear(Model):
            model = ForeignKey(VehicleModel)
            year = PositiveIntegerField()

    This is going to be used in those contingent drop-down select boxes, which would visually be laid out like [- Year -][- Make -][- Model -]. So, to query the data I need, I would first have to select all distinct years from the years table, sorted descending. Then I'd find all the vehicle makes that have produced a model in that year, and then all the models by that make in that year. Is this a good way to do it, or should I re-arrange the foreign keys somehow? Or use a many-to-many table for the years/models so that no year is repeated?
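    A rough sketch (not from the original question) of how the three cascading queries could look in the Django ORM with the models above; chosen_year and chosen_make stand in for the user's earlier dropdown selections.

        chosen_year = 2010                                   # placeholder value
        chosen_make = VehicleMake.objects.get(name="Ford")   # placeholder lookup

        # 1. All distinct years, newest first.
        years = (VehicleYear.objects
                 .values_list("year", flat=True)
                 .distinct()
                 .order_by("-year"))

        # 2. Makes that produced at least one model in the chosen year.
        makes = (VehicleMake.objects
                 .filter(vehiclemodel__vehicleyear__year=chosen_year)
                 .distinct())

        # 3. Models by the chosen make in the chosen year.
        models = (VehicleModel.objects
                  .filter(make=chosen_make, vehicleyear__year=chosen_year)
                  .distinct())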

    Read the article

  • Effective methods for reading and writing large files in C

    - by Bertholt Stutley Johnson
    I'm writing an application that deals with very large user-generated input files. The program will copy about 95 percent of the file, effectively duplicating it and switching a few words and values in the copy, and then appending the copy (in chunks) to the original file, such that each block (consisting of between 10 and 50 lines) in the original is followed by the copied and modified block, and then the next original block, and so on. The user-generated input conforms to a certain format, and it is highly unlikely that any line in the original file is longer than 100 characters. Which would be the better approach?

        a) Use one file pointer plus variables that hold the current read and write positions, seeking the file pointer back and forth to read and write; or
        b) Use multiple file pointers, one for reading and one for writing.

    I am mostly concerned with the efficiency of the program, as the input files will reach up to 25,000 lines, each about 50 characters long. Thanks!

    Read the article

  • How to change Subversion's default binary mime-type?

    - by lamcro
    Subversion sets a binary file's svn:mime-type property to application/octet-stream by default. I need to change this default to some other mime-type, so that when I import this code for the first time, Subversion sets the mime-type to the one I choose. The reason is that my code base contains code in binary files (a proprietary format), and I have the applications necessary to emulate diff and diff3 for these, but Subversion does not let me use them because of the default mime-type. Please note: there is no standard file extension (*.jar, *.py, etc.) for these code files; some files don't even have an extension. So configuring the mime-type by file extension is not possible.

    Read the article

  • Long Double in C

    - by reubensammut
    I've been reading the C Primer Plus book and got to this example:

        #include <stdio.h>

        int main(void)
        {
            float aboat = 32000.0;
            double abet = 2.14e9;
            long double dip = 5.32e-5;

            printf("%f can be written %e\n", aboat, aboat);
            printf("%f can be written %e\n", abet, abet);
            printf("%f can be written %e\n", dip, dip);
            return 0;
        }

    After I ran this on my MacBook I was quite shocked at the output:

        32000.000000 can be written 3.200000e+04
        2140000000.000000 can be written 2.140000e+09
        2140000000.000000 can be written 2.140000e+09

    So I looked around and found out that the correct format to display a long double is to use %Lf. However, I still can't understand why I got the double abet value instead of what I got when I ran it on Cygwin, Ubuntu and iDeneb, which is roughly:

        -1950228512509697486020297654959439872418023994430148306244153100897726713609013030397828640261329800797420159101801613476402327600937901161313172717568.000000 can be written 2.725000e+02

    Any ideas?

    Read the article
