Search Results

Search found 16794 results on 672 pages for 'memory usage'.

Page 418/672

  • Etiquette when asking questions in an IRC channel

    - by Zarkonnen
    Many larger OSS projects maintain IRC channels to discuss their usage or development. When I get stuck on using a project, having tried and failed to find information on the web, one of the ways I try to figure out what to do is to go into the IRC channel and ask. But my questions are invariably completely ignored by the people in the channel. If there was silence when I entered, there will still be silence. If there is an ongoing conversation, it carries on unperturbed. I leave the channel open for a few hours, hoping that maybe someone will eventually engage me, but nothing happens. So I worry that I'm being rude in some way I don't understand, or breaking some unspoken rule and being ignored for it. I try to make my questions polite, to the point, and grammatical, and try to indicate that I've tried the obvious solutions and why they didn't work. I understand that I'm obviously a complete stranger to the people on the channel, but I'm not sure how to fix this. Should I just lurk in the channel, saying nothing, for a week? That seems absurd too. A typical message I send might be "Hello all - I've been trying to get Foo to work, but I keep on getting a BarException. I tried resetting the Quux, but this doesn't seem to do anything. Does anyone have a suggestion on what I could try?"

    Read the article

  • How to proceed when a bug in open source libraries is suspected?

    - by Suma
    We are using some open source libraries in our projects. Sometimes issues turn up in some of them (most likely library bugs, but it may also be wrong usage on our side, especially since the documentation is not always 100% complete). As the libraries are often quite complex, debugging them to pinpoint the source of the problem is sometimes quite hard. Can you help me summarize what the options are and how exactly to proceed with them? I have just recently hit some strange problems when using TCMalloc (Google's scalable memory allocator) on Windows, so I would most welcome answers which apply to this particular library, but more general answers are good as well. 1) Ask the maintainer/owner of the project for assistance. How can this be done? 2) Hire someone to identify and fix the issue. How do I do this? How can I find someone with enough expertise in a particular library? ... any other options?

    Read the article

  • Help me choose an Open-Source license

    - by Spartan-117A
    So I've done lots of open-source work. I have released many projects, most of which have fallen under GPL, LGPL, or BSD licensing. Now I have a new project (an implementation library), and I can't find a license that meets my needs (although I believe one may exist, hence this question). This is the list of things I'm looking for in the license: (1) appropriate credit given for ALL usage or derivative works; (2) no warranty expressed or implied; (3) the library may be freely used in ANY other open-source/free-software product (regardless of license: GPL, BSD, EPL, etc.); (4) the library may be used in closed-source/commercial products ONLY WITH WRITTEN PERMISSION. GPL - useless to me, obviously, as it completely precludes any and all closed-source use, violating requirement (4). BSD/LGPL/MIT - won't work, because they wouldn't require closed-source developers to get my permission, violating requirement (4). If it weren't for that, BSD (FreeBSD in particular) would look like a good choice here. EPL/MPL - won't work either, as the code couldn't be combined with GPL code, therefore violating requirement (3). Also I'm pretty sure they allow commercial works without asking permission, so they don't meet (4) either. Dual-licensing is an option, but in that case, what combination would hold to all four requirements? Basically, I want BSD minus the commercial use, plus an option to use in commercial/closed-source products as long as the developer has my written permission. EDIT: At the moment I am thinking of something like multiple-licensing under GPL/LGPL plus something else for commercial use?

    Read the article

  • How do I debug an overheating problem?

    - by Tab
    Hello guys. I have a problem with my laptop (Dell Inspiron 1564, Core i5, 4GB RAM, ATI Mobility Radeon HD 4300 graphics, running Ubuntu 10.10 32-bit). It shuts down abruptly, without even a lag in the application I am working with before the shutdown. I think it's an overheating problem. Actually the laptop is hot all the time when I am running Ubuntu. When I switch back to Windows, even under intense load it won't shut down or show any problem as long as I keep proper ventilation (when the air openings are blocked it does the same). On Ubuntu I don't usually do things that need much CPU power: usually surfing the internet, coding web pages and sometimes playing with Python and Ruby. I am not enabling desktop effects, so there is no GPU load except the normal GNOME GUI. As I am writing this, the processor load in the panel monitor applet is 0%, memory is 11% used by programs and 22% by cache, and I have the CPU frequency monitor for each of the 4 cores set to 1.20 GHz (the lowest possible value; I am not sure if this applet really limits CPU usage). Running sensors in a terminal gave me: temp1: +26.8°C (crit = +100.0°C), temp2: +0.0°C (crit = +100.0°C). Running hddtemp /dev/sda in the terminal gave me: /dev/sda: WDC WD3200BEVT-75ZCT2: 46°C. All that looks fine, but the laptop is really hot: I can feel it in the keyboard, the touchpad is painful to touch, and the fan is always spinning. I am also placing 2 small fans running on USB under the laptop right now, and the laptop is lifted over the fans so it's well ventilated. When I am running Windows it doesn't get that hot except when there is a really big load on the CPU, and this is keeping me away from using Linux for everyday tasks. Actually I don't care much about speed, as I can deal with low speed as long as it's not going to shut down abruptly. So please, if you can help me, tell me what the possible causes are and where I should start.

    Read the article

  • Questioning one of the arguments for dependency injection: Why is creating an object graph hard?

    - by oberlies
    Dependency injection frameworks like Google Guice give the following motivation for their usage (source): To construct an object, you first build its dependencies. But to build each dependency, you need its dependencies, and so on. So when you build an object, you really need to build an object graph. Building object graphs by hand is labour intensive (...) and makes testing difficult. But I don't buy this argument: even without dependency injection, I can write classes which are both easy to instantiate and convenient to test. E.g. the example from the Guice motivation page could be rewritten in the following way:

        class BillingService {
            private final CreditCardProcessor processor;
            private final TransactionLog transactionLog;

            // constructor for tests, taking all collaborators as parameters
            BillingService(CreditCardProcessor processor, TransactionLog transactionLog) {
                this.processor = processor;
                this.transactionLog = transactionLog;
            }

            // constructor for production, calling the (productive) constructors of the collaborators
            public BillingService() {
                this(new PaypalCreditCardProcessor(), new DatabaseTransactionLog());
            }

            public Receipt chargeOrder(PizzaOrder order, CreditCard creditCard) { ... }
        }

    So there may be other arguments for dependency injection (which are out of scope for this question!), but easy creation of testable object graphs is not one of them, is it?
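
    For comparison, here is a minimal sketch (not from the original question) of what the Guice wiring for the same classes might look like. The interfaces and implementations below are stand-ins mirroring the Guice documentation example the question quotes; the @Inject constructor replaces the hand-written "production" constructor.

        import com.google.inject.AbstractModule;
        import com.google.inject.Guice;
        import com.google.inject.Inject;
        import com.google.inject.Injector;

        // Stand-in collaborator types, mirroring the question's example.
        interface CreditCardProcessor {}
        interface TransactionLog {}
        class PaypalCreditCardProcessor implements CreditCardProcessor {}
        class DatabaseTransactionLog implements TransactionLog {}

        class BillingService {
            private final CreditCardProcessor processor;
            private final TransactionLog transactionLog;

            @Inject  // Guice supplies the collaborators instead of a no-arg constructor
            BillingService(CreditCardProcessor processor, TransactionLog transactionLog) {
                this.processor = processor;
                this.transactionLog = transactionLog;
            }
        }

        class BillingModule extends AbstractModule {
            @Override
            protected void configure() {
                bind(CreditCardProcessor.class).to(PaypalCreditCardProcessor.class);
                bind(TransactionLog.class).to(DatabaseTransactionLog.class);
            }
        }

        // Usage: the injector, not the caller, builds the object graph.
        // Injector injector = Guice.createInjector(new BillingModule());
        // BillingService billing = injector.getInstance(BillingService.class);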

    Read the article

  • How to represent a graph with multiple edges allowed between nodes and edges that can selectively disappear

    - by Pops
    I'm trying to figure out what sort of data structure to use for modeling some hypothetical, idealized network usage. In my scenario, a number of users who are hostile to each other are all trying to form networks of computers where all potential connections are known. The computers that one user needs to connect may not be the same as the ones another user needs to connect, though; user 1 might need to connect computers A, B and D while user 2 might need to connect computers B, C and E.

    [Image generated with the help of NCTM Graph Creator]

    I think the core of this is going to be an undirected cyclic graph, with nodes representing computers and edges representing Ethernet cables. However, due to the nature of the scenario, there are a few uncommon features that rule out adjacency lists and adjacency matrices (at least, without non-trivial modifications):

      - edges can become restricted-use; that is, if one user acquires a given network connection, no other user may use that connection (in the example, the green user cannot possibly connect to computer A, but the red user has connected B to E despite not having a direct link between them)
      - in some cases, a given pair of nodes will be connected by more than one edge (in the example, there are two independent cables running from D to E, so the green and blue users were both able to connect those machines directly; however, red can no longer make such a connection)
      - if two computers are connected by more than one cable, each user may own no more than one of those cables

    I'll need to do several operations on this graph, such as:

      - determining whether any particular pair of computers is connected for a given user
      - identifying the optimal path for a given user to connect target computers
      - identifying the highest-latency computer connection for a given user (i.e. longest path without branching)

    My first thought was to simply create a collection of all of the edges, but that's terrible for searching. The best thing I can think to do now is to modify an adjacency list so that each item in the list contains not only the edge length but also its cost and current owner. Is this a sensible approach? Assuming space is not a concern, would it be reasonable to create multiple copies of the graph (one for each user) rather than a single graph?
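
    Below is a rough, hypothetical sketch (names invented here) of the "modified adjacency list" idea mentioned at the end: each cable is its own Edge object carrying length and current owner, so parallel cables between the same two computers and per-user restrictions can both be represented without keeping multiple copies of the graph.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // One physical cable; parallel cables are simply separate Edge instances.
        final class Edge {
            final String a, b;     // endpoint computers
            final int length;      // latency / length weight
            String owner = null;   // null while the cable is unclaimed

            Edge(String a, String b, int length) { this.a = a; this.b = b; this.length = length; }

            boolean usableBy(String user) { return owner == null || owner.equals(user); }
        }

        final class Network {
            private final Map<String, List<Edge>> adjacency = new HashMap<>();

            void addCable(String a, String b, int length) {
                Edge e = new Edge(a, b, length);
                adjacency.computeIfAbsent(a, k -> new ArrayList<>()).add(e);
                adjacency.computeIfAbsent(b, k -> new ArrayList<>()).add(e);
            }

            // Edges a given user may still traverse from a node; this is the basis
            // for per-user connectivity and shortest-path queries.
            List<Edge> usableEdges(String node, String user) {
                List<Edge> result = new ArrayList<>();
                for (Edge e : adjacency.getOrDefault(node, new ArrayList<>())) {
                    if (e.usableBy(user)) result.add(e);
                }
                return result;
            }
        }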

    Read the article

  • Is chroot the right choice for my use case?

    - by Anthony
    Backstory: I am working on setting up a MineCraft server and want to allow admins to have ssh access to the MineCraft server console and the appropriate mc server files, but not the whole system. The console provided by the minecraft server is only available to the user that launched the process. In addition, the admins will need terminal access to some basic cli tools such as wget, cp, mv, rm, and a text editor. Plan: I have already set up the ssh aspect of things, requiring pre-shared keys and whatnot. Next I will set up a jailed environment in which all user activity will be contained, and set up user accounts: the first user account will be the minecraft user, which will start the MC server in a multiuser screen session and allow the other admins to attach to it; subsequent users will have their own /home directory for normal usage. Finally I will set up ACLs on the appropriate files to allow each user to edit the mc server files. No one will be doing system updates, nor will anyone be installing any programs, so I'll be the only user with sudo. The issues: I don't want the ssh users to have access to the whole system, but users will still need to use wget or curl to update the mc server files. Is chroot the right tool for this use case, or is there something more appropriate for the job? I have no experience setting up a chroot environment and have found several tools to aid in this process. Jailkit seems to be the most robust, but it's not in the standard repos.

    Read the article

  • What is a 'good number' of exceptions to implement for my library?

    - by Fuzz
    I've always wondered how many different exception classes I should implement and throw for various pieces of my software. My particular development is usually C++/C#/Java related, but I believe this is a question for all languages. I want to understand what a good number of different exceptions to throw is, and what the developer community expects of a good library. The trade-offs I see include: more exception classes can allow very fine-grained error handling for API users (prone to user configuration or data errors, or files not being found); more exception classes allow error-specific information to be embedded in the exception, rather than just a string message or error code; more exception classes can mean more code maintenance; more exception classes can mean the API is less approachable to users. The scenarios I wish to understand exception usage in include: during a 'configuration' stage, which might include loading files or setting parameters; during an 'operation' type phase where the library might be running tasks and doing some work, perhaps in another thread. Other patterns of error reporting without using exceptions, or with fewer exceptions (as a comparison), might include: fewer exceptions, but embedding an error code that can be used as a lookup; returning error codes and flags directly from functions (sometimes not possible from threads); implementing an event or callback system upon error (avoids stack unwinding). As developers, what do you prefer to see? If there are MANY exceptions, do you bother handling them separately anyway? Do you have a preference for error handling types depending on the stage of operation?
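
    As one concrete (and entirely hypothetical) illustration of the "error-specific information" trade-off, a shallow hierarchy often looks like this: one base type per library, a few subclasses for the broad failure categories, and structured data carried as fields rather than packed into the message string.

        // Base type: callers who don't care about details catch only this.
        class FooLibException extends Exception {
            FooLibException(String message, Throwable cause) { super(message, cause); }
        }

        // Configuration-stage failures, carrying which setting was bad.
        class FooConfigException extends FooLibException {
            final String parameterName;
            FooConfigException(String parameterName, String message) {
                super(message, null);
                this.parameterName = parameterName;
            }
        }

        // Operation-stage failures, carrying which background task failed.
        class FooOperationException extends FooLibException {
            final long taskId;
            FooOperationException(long taskId, String message, Throwable cause) {
                super(message, cause);
                this.taskId = taskId;
            }
        }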

    Read the article

  • Problems syncing photos and strange effects of uploaded files from other devices

    - by Daniel
    I have a Galaxy Spica (GT-i5700) Android v2.1, rooted with Leshak dev 7 #123. But never mind the root info, the problem would be the same unrooted. The photos from this phone are stored in "sdcard/images"; nevertheless the phone also creates a "sdcard/DCIM" but only stores some thumbnails there. Problem nr 1: U1 only reads the DCIM folder for automatic photo upload, so photos stored on this phone are not uploaded. If I move photos to the DCIM folder, U1 recognises the photos and starts uploading them. Possible solution: could there be an option in the settings to set a preferred photo folder? Problem nr 2: Out of 74 pictures, 12 did not get uploaded. Pressing "Retry failed transfers" in Settings does nothing. Pressing the files where status is "Upload failed, tap to retry" only changes the status to "Uploading..." but nothing gets uploaded. If I upload another file to U1, it is uploaded directly without any problem. It has nothing to do with file size; 1,1 MB files have been uploaded fine whilst some that failed are 0,8 MB. Problem nr 3: The photos from DCIM are in my case uploaded to a folder called "Pictures - GT-I5700" in U1. If I log in to the homepage and from there upload another photo to "Pictures - GT-I5700", it shows up in U1 on my phone fine. But when I tap it, U1 downloads the photo to "sdcard/U1/Pictures - GT-I5700". If it syncs photos from "sdcard/DCIM" to a specific folder, why not also download files to the same folder from which they were synced? After a while of usage, syncing and uploading files from different clients, it would be a mishmash of folders and places where files are stored, and considering that, I see no use for U1 at all. Another question: if my SD card in some way breaks down, some folders cannot be read, or the card is temporarily changed and U1 is running, does U1 consider that as files deleted and also delete them from the cloud?

    Read the article

  • Advisor Webcast: Oracle Payments Funds Disbursement Analyzer

    - by SamanthaF-Oracle
    Have you registered for the Oracle Payments Funds Disbursement Analyzer Advisor Webcast in June? Don't delay! This one-hour session is recommended for technical and functional users of the Oracle Payments product who would like an introduction to the Oracle Payments Funds Disbursement Analyzer. The session will highlight how to use the Payments Funds Disbursement Analyzer to identify and troubleshoot issues with Payment Process Requests (PPR) and other Payments related processes/setups. TOPICS WILL INCLUDE: overview of the Oracle Payments Funds Disbursement Analyzer; how to install and run it; proactive usage of the Analyzer; using the Analyzer to troubleshoot. When? Wednesday, June 25, 2014 at 11:00 am Eastern Daylight Time (New York, GMT-04:00) / 8:00 am Pacific Daylight Time (San Francisco, GMT-07:00) / 4:00 pm GMT Summer Time (London, GMT+01:00) / 8:30 pm India Time (Mumbai, GMT+05:30). A short, live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. See Doc ID 1671948.1 for further details and to register your interest.

    Read the article

  • OUCH! Laptop running SUPER HOT after 12.10 upgrade!

    - by dinkelk
    I was running 12.04 for 6 months; my laptop ran almost silently and cool enough to hold on my lap. I updated to 12.10 and now my computer gets too hot to hold on my lap and the fan is constantly running on full blast. This is the output of sensors:

        acpitz-virtual-0
        Adapter: Virtual device
        temp1:         +84.0°C  (crit = +99.0°C)

        coretemp-isa-0000
        Adapter: ISA adapter
        Physical id 0: +84.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 0:        +74.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 1:        +72.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 2:        +75.0°C  (high = +86.0°C, crit = +100.0°C)
        Core 3:        +84.0°C  (high = +86.0°C, crit = +100.0°C)

        radeon-pci-0100
        Adapter: PCI adapter
        temp1:         +76.0°C

    I have an HP Pavilion dv6, i7, AMD Radeon graphics. Please let me know if you need additional information. What could be different between the two Ubuntu versions that caused such a drastic change? Edit 1: Per @Paul's suggestion, I ran htop to try to narrow down the problem; the screenshots (left and right sides of the terminal) are in the original post. This is about 10 minutes after boot-up; htop, yakuake, and a chrome page with 1 tab opened to this question are all that I have manually opened. The most taxing program for the CPU is htop itself. I think the problem must lie elsewhere; my temps are already up to ~65C for the CPU and ~69C for the GPU, with nearly 0% CPU usage.

    Read the article

  • EBS ATG Advisor Webcasts - FREE!

    - by cwarticki
    For June 2012 we have scheduled 2 webcasts: an E-Business Suite OAM Overview and Usage session and the E-Business Suite Workflow Advisor. We are running 2 sessions of each for better global alignment. E-Business Suite - OAM Overview and Monitoring. Agenda: Oracle Applications Manager (OAM) Overview; Log files; Diagnostics and Logging; Concurrent processing through OAM; Applications Dashboard; Troubleshooting; Patch Management; Patch Wizard; OAM "How To" Documents; Questions & Answers. EMEA Session: July 10, 2012 at 09:00 AM UK / 10:00 AM CET / 13:30 India / 17:00 Japan / 18:00 Australia. Details & Registration: Note 1466056.1. Direct link to register in WebEx. US Session: July 11, 2012 at 18:00 UK / 19:00 CET / 10:00 AM Pacific / 11:00 AM Mountain / 01:00 PM Eastern. Details & Registration: Note 1466057.1. Direct link to register in WebEx. E-Business Suite - Workflow Analyzer - Follow-Up. Agenda: Overview of Workflow Analyzer; Enhancements implemented in the latest Release; Questions & Answers. EMEA Session: July 24, 2012 at 09:00 AM UK / 10:00 AM CET / 13:30 India / 17:00 Japan / 18:00 Australia. Details & Registration: Note 1466058.1. Direct link to register in WebEx. US Session: July 25, 2012 at 18:00 UK / 19:00 CET / 10:00 AM Pacific / 11:00 AM Mountain / 01:00 PM Eastern. Details & Registration: Note 1466059.1. Direct link to register in WebEx. Schedules, recordings and presentations of the Advisor Webcasts delivered under the EBS Applications Technology area can be found in Note 1186338.1. Current schedules of Advisor Webcasts for all Oracle products can be found in Note 740966.1. Post-presentation recordings of the Advisor Webcasts for all Oracle products can be found in Note 740964.1. If you have any question about the schedules or if you have a suggestion for an Advisor Webcast to be planned in future, please send an e-mail to Ruediger Ziegler.

    Read the article

  • ksoftirqd uses 100% cpu

    - by andy
    I am running 32-bit Ubuntu 10.04. A lot of the time, ksoftirqd/0 or ksoftirqd/1 starts using up 100% CPU for no apparent reason, and I am forced to reboot my laptop. Incidentally this also happens when I maximize my (YouTube) videos in Chrome and Firefox, but once I un-maximize the videos the CPU usage goes down to the original levels. Any ideas what is going on? --- Addendum --- dmesg produces a ~2000 line output. I searched for 'error' and 'warning' in the output, and here are the relevant lines (along with some headers):

        [ 0.000000] Initializing cgroup subsys cpuset
        [ 0.000000] Initializing cgroup subsys cpu
        [ 0.000000] Linux version 2.6.32-21-generic (buildd@yellow) (gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) ) #32-Ubuntu SMP Fri Apr 16 08:09:38 UTC 2010 (Ubuntu 2.6.32-21.32-generic 2.6.32.11+drm33.2)
        [ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-21-generic root=UUID=157dcfda-acd6-4d1b-a6a8-ff9ccff61906 ro quiet splash
        [ 0.000000] KERNEL supported cpus:
        [ 0.000000] Intel GenuineIntel
        [ 0.000000] AMD AuthenticAMD
        [ 0.000000] Centaur CentaurHauls
        [ 0.000000] BIOS-provided physical RAM map:
        [ 24.775546] EXT3-fs warning: mounting fs with errors, running e2fsck is recommended
        [44920.210518] ata1: SError: { PHYRdyChg CommWake 10B8B Dispar LinkSeq TrStaTrns }
        [44920.210531] res 40/00:00:f0:4b:7f/00:00:18:00:00/40 Emask 0x10 (ATA bus error)
        [58673.134623] chrome[20101]: segfault at 7f38bc4ad000 ip 00007f38be769ecc sp 00007fff24616850 error 4 in libpepflashplayer.so[7f38bdc08000+e55000]
        [ 24.775546] EXT3-fs warning: mounting fs with errors, running e2fsck is recommended
        [44920.210531] res 40/00:00:f0:4b:7f/00:00:18:00:00/40 Emask 0x10 (ATA bus error)

    Read the article

  • Are short abbreviated method/function names that don't use full words bad practice or a matter of style?

    - by Alb
    Is there nowadays any case for brevity over clarity with method names? Tonight I came across the Python method repr() which seems like a bad name for a method to me. It's not an English word. It apparently is an abbreviation of 'representation' and even if you can deduce that, it still doesn't tell you what the method does. A good method name is subjective to a certain degree, but I had assumed that modern best practices agreed that names should be at least full words and descriptive enough to reveal enough about the method that you would easily find one when looking for it. Method names made from words help let your code read like English. repr() seems to have no advantages as a name other than being short and IDE auto-complete makes this a non-issue. An additional reason given in an answer is that python names are brief so that you can do many things on one line. Surely the better way is to just extract the many things to their own function, and repeat until lines are not too long. Are these just a hangover from the unix way of doing things? Commands with names like ls, rm, ps and du (if you could call those names) were hard to find and hard to remember. I know that the everyday usage of commands such as these is different than methods in code so the matter of whether those are bad names is a different matter.

    Read the article

  • Will new Twitter API 1.1 allow hashtag/tweet/trend queries without any authentication, i.e. for a client that does not use an user's account at all?

    - by P5music
    I see that, even when not logged in to Twitter with an account, if I google hashtags or Twitter accounts, Twitter shows them. I think it should also be possible to get those tweets programmatically, but I do not know it for sure, so I ask for confirmation here, especially for the future with the new Twitter API restrictions. I mean, will it be possible to get tweets from hashtags or accounts without logging in to a user account, and so not wanting to access the user settings, subscriptions, etc. (because I do not need them), thus not having to respect any token limit? I found these API 1.1 FAQs; should I be concerned? "Will an application have to request user authorization just to make public API calls? When API v1.1 is released, user authorization (and access tokens) are required for all API 1.1 requests. In the weeks following release, some methods will require only application-based authentication for certain "userless" contexts." "Will the Search API require authentication? The Search API is now part of the official REST API in version 1.1. In addition to serving results in a format consistent with other Tweet resources, usage will also require authentication."

    Read the article

  • How to optimize calls to multiple APIs at once and return as one set?

    - by Martin
    I have a web app that searches across 2 APIs right now. I have my own RESTful web service that I call, and it does all the work on the backend to asynchronously call the 2 APIs and concatenate them into one result set for my web app to use. I want to scale this out and add as many other APIs as I can (currently looking at about 10 more). But as I add APIs, the call to my service gets (potentially) slower and more complex. How do I handle one API not responding ... and other issues that arise? What would be the best way to approach this? Should I create a service call for each API, so that each one is independent and not coupled to all the other calls? Is there a way on the backend to handle the multiple API calls without all the extra complexity it adds? If I go the route of a service call per API, my client code gets more complex (and I have a lot of clients); it's more work for the client, and since I have mobile apps, it will cost the client more data usage. If I go with one service call, is there a way to set up some sort of connection so I can return data as I get it, in case one service call hangs?
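
    One common way to keep the aggregate call from being held hostage by a single slow API is to fan the calls out in parallel on the backend and give each one its own timeout. A minimal sketch follows; the searchApi() helper and the 2-second budget are assumptions for illustration, not part of the original service.

        import java.util.List;
        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.TimeUnit;
        import java.util.stream.Collectors;

        class AggregatedSearch {
            static List<String> searchAll(List<String> apiEndpoints, String query) {
                List<CompletableFuture<List<String>>> calls = apiEndpoints.stream()
                    .map(endpoint -> CompletableFuture
                        .supplyAsync(() -> searchApi(endpoint, query))
                        .completeOnTimeout(List.<String>of(), 2, TimeUnit.SECONDS) // slow API -> empty result
                        .exceptionally(ex -> List.of()))                           // failed API -> empty result
                    .collect(Collectors.toList());

                // Concatenate whatever the APIs returned within the time budget.
                return calls.stream()
                    .flatMap(f -> f.join().stream())
                    .collect(Collectors.toList());
            }

            // Placeholder for the real HTTP call to one upstream API.
            static List<String> searchApi(String endpoint, String query) {
                return List.of(endpoint + " result for " + query);
            }
        }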

    Read the article

  • sudoers - simple explanation requested

    - by Redsandro
    Every time I want to be able to run something that requires me to be a sudoer too many times, I need to google for the formatting of /etc/sudoers to remind me again what exactly the proper way to write it is. Now I see different writing styles in my sudoers file, which is the consequence of different google results over the months. I've also noticed that the second example (below) seems to work in XFCE, but not in Cinnamon (Gnome 3). This could be totally unrelated, but nonetheless I'd like to know once and for all: what is the correct grammar of the sudoer line, and what is the difference between the given examples?

        redsandro ALL=NOPASSWD:/path/to/command
        redsandro ALL=(ALL) NOPASSWD:/path/to/command
        redsandro ALL=(ALL:ALL) NOPASSWD:/path/to/command

    Also, what are all the ALL's for? One user, one command, yet I need to use the ALL keyword up to three times? Am I doing this wrong? Of course, omitting NOPASSWD: makes you enter your password before you are permitted to run the command, but one point of confusion is the usage of = and :, since the final command that is the subject of the line can be preceded by =, :, a comma, or ); confusing grammar for similar semantics.

    Read the article

  • Provide an OnChange event for an internal property which is controlled externally?

    - by NGLN
    For fun and by request I am updating this ImageGrid component, a kind of listbox for images that has a FileNames property of type TStrings. For ease of writing, I have been misusing its FileNames.Objects property for bitmap storage. But since the TStrings type suggests that users of the component could or would want to use the Objects property for custom data, e.g. like TListBox.Items, I am rewriting the component to store the bitmaps elsewhere and leave FileNames.Objects untouched for unknown future usage. Now I am wondering whether to provide an OnChange event, and if so, whether to fire it when one or more FileNames.Objects entries change. Trying to answer it myself, I dove into Delphi's own VCL and stumbled on: TMemo: has an OnChange event, but ignores Lines.Objects; TListBox: has no OnChange event, but is capable of storing Items.Objects; TStringGrid: has no OnChange event, but is capable of storing Objects, Rows.Objects and Cols.Objects. So now I am somewhat puzzled, because I cannot imagine Borland's developers left out events for several Objects properties merely for convenience. Sure, when a user changes a FileNames.Object in my component, he knows he does and could implement the appropriate interaction himself. But wouldn't it be convenient if the component did so automatically? What would you expect from this component in this regard?

    Read the article

  • A generic Re-usable C# Property Parser utility [on hold]

    - by Shyam K Pananghat
    This is about a utility I happened to write which can parse through the properties of a data contract at runtime using reflection. The input required is an XPath-like string. Since this uses reflection, you don't have to add a reference to any of your data contracts, thus making it purely generic and re-usable. You can read about this and get the full C# source code here: Property-Parser-A-C-utility-to-retrieve-values-from-any-Net-Data-contracts-at-runtime. Now about the doubts I have about this utility. I am using it extensively in many places in my code. I am using Regex repeatedly inside a recursive method; does this affect memory usage or GC collection badly? Do I have to dispose of it manually? If yes, how? Statements like obj.GetType().GetProperty() and obj.GetType().GetField() return a .NET "object", which makes it difficult or impossible to introduce generics here. Does this cause any overhead such as boxing? Overall, please suggest how to make this utility more performance-efficient and lighter on memory.
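
    The utility in question is C#, but the Regex-inside-recursion concern translates directly: the usual fix is to compile the pattern once (a static readonly Regex in C#, or a static final Pattern in Java) and reuse it, rather than constructing a new regex on every recursive call; compiled patterns are ordinary garbage-collected objects and need no manual disposal. A rough Java analogue of that one change (the pattern and path syntax here are invented for illustration):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        final class PropertyPathParser {
            // Compiled once and reused by every call; splits segments like "Orders[2].Customer.Name".
            private static final Pattern SEGMENT = Pattern.compile("([A-Za-z_]\\w*)(?:\\[(\\d+)\\])?");

            static void printSegments(String path) {
                Matcher m = SEGMENT.matcher(path);
                while (m.find()) {
                    System.out.println("property=" + m.group(1) + ", index=" + m.group(2));
                }
            }
        }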

    Read the article

  • Hosting and scaling a Facebook application in the cloud? [migrated]

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but are still not sure where to host it economically, with good provision to scale in case the app goes viral. Some details about the app: it would be HTML-based like a website, using Django as the framework; 100K is the number of expected pageviews in a day if the app goes viral; the users will not generate any media content, only some database data will be generated by them. It would be great if someone with more experience could give guidance on the following points: A) Hosting on Google App Engine or Amazon EC2 or some other cloud like RackSpace: favourable points found in App Engine were ease of deployment, cost effectiveness and easy scaling; for EC2: full control of the virtual machine, plus Amazon NoSQL and RDBMS database services in case we decide to use them. B) Does backend technology affect monthly cost? E.g. would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services over Amazon EC2, prove to be better than raw cloud management? It is not that we are trying for premature scaling; we just want to have a good start so that we are ready to handle unpredicted growth and scale.

    Read the article

  • What's wrong with cplusplus.com?

    - by Kerrek SB
    This is perhaps not a perfectly suitable forum for this question, but let me give it a shot, at the risk of being moved away. There are several references for the C++ standard library, including the invaluable ISO standard, MSDN, IBM, cppreference, and cplusplus. Personally, when writing C++ I need a reference that has quick random access, short load times and usage examples, and I've been finding cplusplus.com pretty useful. However, I've been hearing negative opinions about that website frequently here on SO, so I would like to get specific: What are the errors, misconceptions or bad pieces of advice given by cplusplus.com? What are the risks of using it to make coding decisions? Let me add this point: I want to be able to answer questions here on SO with accurate quotes of the standard, and thus I would like to post immediately-usable links, and cplusplus.com would have been my choice site were it not for this issue. Update: There have been many great responses, and I have seriously changed my view on cplusplus.com. I'd like to list a few choice results here; feel free to suggest more (and keep posting answers). As of June 29, 2011: Incorrect description of some algorithms (e.g. remove). Information about the behaviour of functions is sometimes incorrect (atoi), fails to mention special cases (strncpy), or omits vital information (iterator invalidation). Examples contain deprecated code (#include style). Inexact terminology is doing a disservice to learners and the general community ("STL", "compiler" vs "toolchain"). Incorrect and misleading description of the typeid keyword.

    Read the article

  • End of Public Updates for Java SE 6

    - by Tori Wieldt
    It's important for developers and systems administrators to either make the transition over to Java SE 7 or to work with Oracle to get updates via the Java SE Support program. Have you updated to Java SE 7? Along with great features (Fork/Join, NIO, Project Coin), Java SE 7 is being updated and patched regularly. Java SE 7 has been out for over a year and is ready to download. The last publicly available release of Oracle JDK 6 is scheduled for February 2013. This means that after 19 February 2013, all new security updates, patches and fixes for Java SE 6 and Java SE 5 will only be available through My Oracle Support and will thus require a commercial license with Oracle. In the event you are not ready to migrate to Java SE 7, Oracle offers Java SE Support for continued access to critical bug fixes and security fixes as well as general maintenance for JDK 6. Additionally, Java SE Advanced and Suite offer superior diagnostics and manageability tools that minimize the costs of deployment, monitoring and maintenance of Java-based IT environments. The Java SE Support Roadmap reflects an updated timeline for the End of Public Updates for JDK 6. The End of Public Updates date has been extended from November 2012 to February 2013, to allow some more time for the transition to JDK 7. Older releases of Java SE 6 will still be available in the Java SE archive, but will require a commercial license with Oracle for any new security updates, patches and fixes. The End of Public Updates for Java SE 6 will not impact the usage, availability, or patching of Java SE 6 used for Fusion Middleware 11g and 12c. The support schedule for Java SE used for and in Fusion Middleware is not impacted by this announcement. For more information, visit the Java SE page on Oracle.com.

    Read the article

  • Where to place php libraries/extensions?

    - by gdaniel
    I am new to a lot of server configurations and options. I want to add extra PHP libraries/extensions to my server. Where do I add them? (I am on a CentOS 6.5 VPS.) For example, I want to add the phpseclib PHP library. Their website instructions are minimal: Usage: This library is written using the same conventions that libraries in the PHP Extension and Application Repository (PEAR) used to be written in (current requirements break PHP4 compatibility). In particular, this library needs to be in your include_path: <?php set_include_path(get_include_path() . PATH_SEPARATOR . 'phpseclib'); include('Net/SSH2.php'); ?> It tells me how to use it, but it doesn't tell me where to add the actual library files. Should I add them under usr/local/lib? usr/local/lib/php? usr/local/lib/php/pear? Or can I add them under public_html? Also, my VPS has several users under /home/.. ; is there a way to make the library available for only one user?

    Read the article

  • GLES2.0 3D Android game performance and multi threading the update?

    - by Ofer
    I have profiled my mixed Java\C++ Android game and got the following result: https://dl.dropbox.com/u/8025882/PompiDev/AndroidProfile.png As you can see, the pink chunk is a C++ function that updates the game. It does things like updating the logic, but mostly it generates a "request list" for rendering. The thing is, I generate DrawLists in C++ and then send them to Java to process and draw using GLES2.0. Since then I have been able to improve update from 9ms down to about 7ms, but I would like to ask if I would benefit from multi-threading the update. As I understand from that diagram, the function that takes the most time is the one whose color you see on the timeline, so the pink area is taken mostly by update. The other area has MainOpenGL.Handle as its main contributor (which is my Java function), but since it's not drawn to the top of the diagram I conclude other things are happening at the same time that use the CPU, or even GPU work that isn't shown in this diagram. I am not sure how the GPU works here. Does it calculate stuff in parallel to the CPU, or is it part of the CPU usage, as in an SoC? I am not sure. Anyway, in case GPU work DOES happen in parallel to the CPU, then I would guess that if I run this C++ update in parallel to the thread that makes the OpenGL calls, I might make use of "dead CPU time" due to GPU stalling, or maybe have the GPU calls processed earlier because they won't have to wait for update to finish. How do you suggest I improve performance based on that? Thanks.
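
    If the update and GL threads are split, the usual pattern is to let the update thread build a complete draw list and hand the finished list to the GL thread, so the next update overlaps with the submission of the previous frame. A minimal, hypothetical sketch of that hand-off (draw commands are shown as Runnables only to keep the example short):

        import java.util.ArrayList;
        import java.util.List;

        final class DrawListExchange {
            private List<Runnable> ready = new ArrayList<>();  // last completed frame's draw list

            // Called by the update (game logic) thread once a frame's list is fully built.
            synchronized void publish(List<Runnable> completedDrawList) {
                ready = completedDrawList;
            }

            // Called by the GL thread at the start of onDrawFrame(); it always renders the
            // newest completed list, never one that is still being written.
            synchronized List<Runnable> latest() {
                return ready;
            }
        }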

    Read the article
