Search Results

Search found 6078 results on 244 pages for 'processing'.


  • Can I create an SDL_Surface as I do with Allegro?

    - by Petris Rodrigo Fernandes
    First of all, I'm sorry about my English (it isn't my native language). Using Allegro I can create a bitmap to draw on just by doing: BITMAP* bmp = NULL; bmp = create_bitmap(width, height); // I don't remember the exact parameters. I'm using SDL now, and I want to create an SDL_Surface to draw the game level on (the level is static): create an SDL_Surface, draw the tiles onto it, then just blit this surface to the screen instead of drawing the tiles directly on the screen every frame (I believe that would require more processing). Is there a way to create a blank SDL_Surface, as I did with Allegro, just to draw on before blitting it?
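
    A minimal sketch of the SDL 1.2 equivalent (assuming a screen surface is already set up; the function name is illustrative): SDL_CreateRGBSurface allocates a blank software surface you can draw the tiles onto once and then blit to the screen every frame.

        #include <SDL/SDL.h>

        /* Sketch: create a blank software surface matching the screen's pixel format. */
        SDL_Surface *create_level_surface(SDL_Surface *screen, int width, int height)
        {
            SDL_Surface *level = SDL_CreateRGBSurface(SDL_SWSURFACE, width, height,
                                                      screen->format->BitsPerPixel,
                                                      screen->format->Rmask,
                                                      screen->format->Gmask,
                                                      screen->format->Bmask,
                                                      screen->format->Amask);
            /* Optionally: SDL_DisplayFormat(level) to convert it for faster blits. */
            return level;
        }

        /* Draw all tiles onto 'level' once, then each frame:
           SDL_BlitSurface(level, NULL, screen, NULL); */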

    Read the article

  • Actor based concurrency and cancellation

    - by Akash
    I'm reading about actor-based concurrency and I appreciate the simplicity of actors sequentially processing messages on a single thread. However, there is one scenario that doesn't seem possible. Suppose that actor A sends a message to actor B, who then performs some long-running task and returns a completion message to actor A. How can actor A force actor B to cancel the long-running task after it has started? If actor B is running the task in its message-queue thread, it won't pick up the cancellation message until it has completed the task; if actor B runs the task in a background thread, then it seems to be violating the principle of actors. Is there a common way that this scenario is handled with actors? Or does each actor language/framework take a different approach? Or is this not a suitable problem to tackle via actors?
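
    One common workaround, not tied to any particular actor framework (all names below are made up for illustration), is for actor B to split the long task into small slices and send itself a "continue" message after each slice, so that a cancellation message from actor A can be interleaved between slices. A rough Java sketch of that pattern, using a plain BlockingQueue as a stand-in mailbox:

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // Illustration only: an "actor" whose mailbox is a BlockingQueue. The long
        // task is processed one slice per message, so a CANCEL sent by actor A is
        // seen between slices instead of only after the whole task.
        class WorkerActor implements Runnable {
            enum Msg { START, CONTINUE, CANCEL }

            private final BlockingQueue<Msg> mailbox = new LinkedBlockingQueue<>();
            private int progress = 0;

            void send(Msg m) { mailbox.offer(m); }

            @Override
            public void run() {
                try {
                    while (true) {
                        Msg m = mailbox.take();
                        if (m == Msg.CANCEL) {
                            System.out.println("cancelled at slice " + progress);
                            return;
                        }
                        doOneSlice();              // a small, bounded chunk of work
                        if (progress >= 1000) {
                            System.out.println("done");
                            return;
                        }
                        send(Msg.CONTINUE);        // re-enqueue the remaining work
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }

            private void doOneSlice() { progress++; }
        }

    Some runtimes also allow forcibly terminating an actor from outside (for example Erlang's exit(Pid, kill)), but that forfeits cleanup, which is why cooperative slicing like the above is a common pattern.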

    Read the article

  • Disable ATI Radeon graphics card and use intel graphics (switcheroo unavailable)

    - by user92356
    So I have an HP Envy with ATI Radeon 5450 + Intel switchable graphics. I think (though I'm not sure) that the Radeon is running right now on Ubuntu 12.04, because my laptop is making too much noise when I'm doing something non-GPU-intensive like word processing or web browsing. So what I want to do is disable the ATI Radeon and use the Intel card instead. I looked around and it seems all the solutions use switcheroo, but I don't have it on my computer! I think this happened because I tried installing the proprietary driver (fglrx). Any and all help is 200% appreciated, thank you.

    Read the article

  • How do you handle measuring Code Coverage in JavaScript

    - by Dancrumb
    In order to measure code coverage for JavaScript unit tests, one needs to instrument the code, run the tests, and then perform post-processing. My concern is that, as a result, you are unit testing code that will never be run in production. Since JavaScript isn't compiled, what you test should be precisely what you execute. So here's my question: how do you handle this? One thought I had was to run the unit tests against the production code and use that for my pass/fail. I would then create a shadow copy of my production code with instrumentation and run my unit tests again; this would give me my code coverage stats. Has anyone come across a method that is a little more graceful than this?

    Read the article

  • Installing Ubuntu

    - by Mister AR
    I ran into a problem when installing Ubuntu 12.04 on a VMware system on my Windows 7 x64 host: at the end of the installation, after retrieving files, it stopped and didn't move forward. Additionally, I got another problem when I wanted to install the packages I had updated; it gave me the error below:

        installArchives() failed: Error in function: Setting up libssl1.0.0 (1.0.1-4ubuntu5.2) ...
        locale: Cannot set LC_CTYPE to default locale: No such file or directory
        locale: Cannot set LC_MESSAGES to default locale: No such file or directory
        locale: Cannot set LC_ALL to default locale: No such file or directory
        debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
        dpkg: error processing libssl1.0.0 (--configure): subprocess installed post-installation script returned error exit status 1

    Please help me soon! Thank you all...

    Read the article

  • How to recover data from NTFS partition that was made into a Swap partition?

    - by Raghav Mehta
    I have extremely important data on my Windows partition. During the Ubuntu 10.10 installation, when it said that I should create something called swap space, I selected that partition to be the swap space (without even knowing what it actually meant). GRUB 2 doesn't show up, so I don't get a choice to boot Ubuntu or Windows. I don't see my Windows partition as a removable device in Ubuntu either. When I go to Disk Utility, select sda2 (i.e. my Windows partition), click Edit Partition, select HPFS/NTFS for the type, tick Bootable and click OK, the small processing icon keeps rotating at the bottom right of sda2 in the chart, and after about 10 to 15 minutes it gives an unknown error; thus, I am still unable to use Windows. I am even worse than a beginner who doesn't know a thing about Ubuntu, so please be patient and help me out.

    Read the article

  • Is there a way to install Ubuntu stripped down without desktop applications?

    - by Nick Berardi
    Just to start off, I know of Lubuntu, but it really doesn't meet what I am looking for. Basically what I am looking for is the standard desktop Ubuntu install, but without all the word processing, multimedia, and games installed. I have seen posts about how to get the desktop environment running on Ubuntu Server, but they seem complicated and never seem to equal the standard desktop install. So my question is, is there any way to tell the standard desktop install not to install all the applications? Or is there a distro available that leaves all the applications out and just has the standard desktop look and feel? What I really want this for is development purposes, to run in a VM to do Mono development.

    Read the article

  • Best practice Java - String array constant and indexing it

    - by Pramod
    For string constants it's usual to use a class with final String values. But what's the best practice for storing a string array? I want to store different categories in a constant array, and every time a category is selected I want to know which category it belongs to and process based on that. Addition: To make it clearer, I have categories A, B, C, D, E in a constant array. Whenever a user clicks one of the items (buttons will have those texts), I should know which item was clicked and do processing based on it. I can define an enum (say Cat) and every time do if clickedItem == Cat.A ... else if clickedItem == Cat.B ... else if ..., or even register listeners for each item separately. But I wanted to know the best practice for handling these kinds of problems.
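
    A sketch of the enum approach, which replaces the if/else chain by letting each constant carry its own handling (class and method names here are illustrative, not from any specific API):

        // Illustrative sketch: each category knows how to process a click on itself.
        enum Category {
            A { @Override void process() { System.out.println("processing A"); } },
            B { @Override void process() { System.out.println("processing B"); } },
            C { @Override void process() { System.out.println("processing C"); } },
            D { @Override void process() { System.out.println("processing D"); } },
            E { @Override void process() { System.out.println("processing E"); } };

            abstract void process();

            // Map a button's text back to its constant, e.g. in a click handler:
            static void onItemClicked(String buttonText) {
                Category.valueOf(buttonText).process();
            }
        }

    Category.values() also gives you the constants in declaration order, so the enum can serve as both the "string array constant" and the dispatch table.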

    Read the article

  • Shopping cart for service providers?

    - by uos??
    From my limited exposure, it seems to me that most shopping cart/e-commerce platforms are built specifically for product-based retailers. On several occasions now, I've been asked about e-commerce solutions for service providers. That is, it's basically just a single, highly configurable "product" with payment but no shipping. Any recommendations for a cost-efficient solution (high feature coverage) for such a web platform? Requirements:
    - .NET
    - No/suppressed product catalog
    - A service customization selection form
    - Payment (probably PayPal with accountless credit card processing)
    - Guest purchases (no site account required)
    - Email confirmation
    - A customer-service-facing control panel
    It's hard to search for such a product because I get "web service based ecommerce software" and so on clouding up the results.

    Read the article

  • Did You Know? More online seminars!

    - by Kalen Delaney
    I am in Tucson again, having just recorded two more online workshops to be broadcast by SSWUG. We haven't set the dates yet, but we are thinking about offering a special package deal for the two of them. The topics really are related and I think they would work well together. They are both on aspects of Query Processing. The first was on how to interpret Query Plans and is an introduction to the topic. However, it only includes a discussion of how SQL Server actually processes your queries. For example,...(read more)

    Read the article

  • Using JDBC to asynchronously read large Oracle table

    - by Ben George
    What strategies can be used to read every row in a large Oracle table, only once, but as fast as possible with JDBC and Java? Consider that each row has a non-trivial amount of data (30 columns, including large text in some columns). Some strategies I can think of are:
    1. Single thread reading the table. (Too slow, but listed for clarity.)
    2. Read the IDs into a ConcurrentLinkedQueue, use threads to consume the queue and query by ID in batches.
    3. Read the IDs into a JMS queue, use workers to consume the queue and query by ID in batches.
    What other strategies could be used? For the purpose of this question, assume processing of the rows to be free.
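
    A rough sketch of strategy 2 above, assuming a table and ID column named big_table/id (all identifiers, credentials and the connection URL are placeholders): one pass streams the IDs into a ConcurrentLinkedQueue, then worker threads drain it in batches and fetch full rows with an IN-list query. Setting a large fetch size matters with Oracle, whose JDBC driver defaults to only 10 rows per round trip.

        import java.sql.*;
        import java.util.*;
        import java.util.concurrent.*;

        // Sketch only: table/column names, credentials and sizes are placeholders.
        public class ParallelTableReader {
            static final String URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL";

            public static void main(String[] args) throws Exception {
                Queue<Long> ids = new ConcurrentLinkedQueue<>();

                // 1. Stream all IDs with a large fetch size.
                try (Connection c = DriverManager.getConnection(URL, "user", "pass");
                     Statement s = c.createStatement()) {
                    s.setFetchSize(5000);
                    try (ResultSet rs = s.executeQuery("SELECT id FROM big_table")) {
                        while (rs.next()) ids.add(rs.getLong(1));
                    }
                }

                // 2. Workers drain the queue in batches and fetch the full rows.
                ExecutorService pool = Executors.newFixedThreadPool(8);
                for (int w = 0; w < 8; w++) {
                    pool.submit(() -> {
                        try (Connection c = DriverManager.getConnection(URL, "user", "pass")) {
                            List<Long> batch = new ArrayList<>();
                            Long id;
                            while ((id = ids.poll()) != null) {
                                batch.add(id);
                                if (batch.size() == 100) { fetchRows(c, batch); batch.clear(); }
                            }
                            if (!batch.isEmpty()) fetchRows(c, batch);
                        } catch (SQLException e) {
                            e.printStackTrace();
                        }
                        return null;
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
            }

            static void fetchRows(Connection c, List<Long> batch) throws SQLException {
                String marks = String.join(",", Collections.nCopies(batch.size(), "?"));
                try (PreparedStatement ps = c.prepareStatement(
                        "SELECT * FROM big_table WHERE id IN (" + marks + ")")) {
                    for (int i = 0; i < batch.size(); i++) ps.setLong(i + 1, batch.get(i));
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) { /* row processing is free per the question */ }
                    }
                }
            }
        }

    A related variation on the same idea is to split the scan itself into disjoint ranges (by ID or ROWID range) so the workers never need the shared queue at all.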

    Read the article

  • How to remove the old driver for Canon MX870 and install a new one?

    - by madjoe
    I am using 12.04. In addition to my Canon MX870 printer only showing "Processing" on its status LCD, I'm not sure whether I successfully removed the old MX870 driver (I removed it using Ubuntu Software Center). I then added a new PPA, ran apt-get update, and then:

        $ sudo add-apt-repository ppa:michael-gruz/canon-trunk
        $ sudo apt-get update
        $ sudo apt-get install cnijfilter-mx870series
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package cnijfilter-mx870series is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or is
        only available from another source
        E: Package 'cnijfilter-mx870series' has no installation candidate

    How can I resolve this?

    Read the article

  • Dynamic MMap ran out of room when trying to sudo apt-get anything

    - by user1610406
    I was getting an error in Update Manager that asked me to do a partial upgrade, and the upgrade failed. Now I can't sudo apt-get install anything. I tried to fix it, and now I can't sudo apt-get anything at all. Every time, I get this output:

        Reading package lists... Error!
        E: Dynamic MMap ran out of room. Please increase the size of APT::Cache-Limit. Current value: 25165824. (man 5 apt.conf)
        E: Error occurred while processing libuptimed0 (NewVersion1)
        E: Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_lucid_universe_binary-i386_Packages
        W: Unable to munmap
        E: The package lists or status file could not be parsed or opened.

    I have no idea why this is happening or how to fix it, and I fear that if I try something that probably doesn't work it will make my problem worse. (Just for reference, I am currently running 10.04 (Lucid) on my machine.)
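
    For what it's worth, the limit the error message refers to can be raised through an apt configuration snippet (the file name below is arbitrary); on Lucid this was the usual first step, and a commonly suggested follow-up for the "Problem with MergeList" error is to clear the downloaded lists under /var/lib/apt/lists/ and run sudo apt-get update again.

        # /etc/apt/apt.conf.d/00cache-limit  (file name is arbitrary)
        APT::Cache-Limit "100000000";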

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second.

    So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination, associated SLAs, etc. For each packet, the router has to determine the address of the next "hop" to the destination; it has to determine how to prioritize this packet. If it's a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone's sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won't be happy with a "choppy" VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the "owner" of the packet, etc. It looks up the internal database of "rules" for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it's a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally:

    - Look up the most efficient next "hop" towards the destination. The "most efficient" next hop can change, depending on latency, availability, etc.
    - Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data FTP).
    - Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small "slice" of a session. The context for the "header" packet needs to be stored in the router in order to make this work.
    - If the priority of the packet is low, then "store" the packet temporarily in the router until it is time to forward the packet to the next hop.
    - Update various statistics about the packet.

    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the "send" in a single transaction so that the forwarding address doesn't change while you're sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.

    Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there's no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I'd do – choose BDB, so I can focus on my business problem. What will you do?
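
    As a flavor of what "embedding BDB" looks like, here is a minimal sketch using the Berkeley DB C API; it omits the transactional environment, replication and error handling that a real router application would configure, and the key/value layout is purely illustrative.

        #include <db.h>
        #include <string.h>

        /* Minimal sketch: open a BTree database and store one routing entry. */
        int store_next_hop(const char *dest, const char *next_hop)
        {
            DB *dbp;
            DBT key, data;
            int ret;

            if ((ret = db_create(&dbp, NULL, 0)) != 0)
                return ret;
            if ((ret = dbp->open(dbp, NULL, "routes.db", NULL,
                                 DB_BTREE, DB_CREATE, 0664)) != 0) {
                dbp->close(dbp, 0);
                return ret;
            }

            memset(&key, 0, sizeof(key));
            memset(&data, 0, sizeof(data));
            key.data  = (void *)dest;
            key.size  = (u_int32_t)strlen(dest) + 1;
            data.data = (void *)next_hop;
            data.size = (u_int32_t)strlen(next_hop) + 1;

            ret = dbp->put(dbp, NULL, &key, &data, 0);  /* non-transactional for brevity */
            dbp->close(dbp, 0);
            return ret;
        }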

    Read the article

  • How customers view and interact with a company

    The Harvard Business Review article written by Rayport and Jaworski is aptly titled "Best Face Forward" because it sheds light on how customers view and interact with a company. In the past, most business interaction with customers was performed in a face-to-face meeting where one party would present an item for sale and the other would decide whether to purchase it. In addition, if there was a problem with a purchased item, the buyer would bring it back to the person who sold it for resolution. One of my earliest examples of witnessing this was when I was around 6 or 7 years old and was allowed to spend the summer in Tennessee with my grandparents. My grandfather had just written a book about the local history of his town and was selling copies to his friends and local bookstores. I still remember he offered to pay me a small commission for every book I helped him sell, because I was carrying the books around for him. Every sale he made was face to face with his customers, which allowed him to share his excitement for the book with everyone.

    In today's modern world there is less and less human interaction, as the use of computers and other technologies allows us to communicate within seconds even though both parties may be across the globe or just next door. That being said, customers view a company through multiple access points, called faces, that represent the ability to interact without actually seeing a human face. As a software engineer I see this as both a good and a bad thing, because direct human interaction and technology-based interaction each have good and bad attributes depending on the customer. How organizations coordinate business and IT functions to provide quality service varies based on each individual business and the goals and directives put in place by its management. According to Rayport and Jaworski, the type of interaction used through a particular access point may lend itself to being people-dominant, machine-dominant, or a combination of both. The method by which a company communicates information through an access point is a strategic choice that relates costs and customer outcomes. To simplify this, the choice is based on what can give the customer the best experience interacting with the company when the cost of the interaction is also a factor. I personally see examples of this every day at work. The company website is machine-dominant, with people updating and maintaining the information; our group's department is people-dominant, because most of the customer interaction is done at the customer's location and is backed up by machine-based data sources; and our sales/member service department is a hybrid, because employees work in tandem with machines in order to assist customers with signing up or any other issue they may have.

    The positive and negative aspects of human and machine interfaces are a key consideration in deciding which interface, or combination of the two, to use when allowing customers to access a company. Rayport and Jaworski also used MIT professor Erik Brynjolfsson's preliminary catalog of human and machine strengths. He stated that humans outperform machines in judgment, pattern recognition, exception processing, insight, and creativity. I have found this to be true based on how sales and member service reps at my company handle a multitude of questions and various situations with a lot of unknown variables. A machine interface could never effectively handle these scenarios, because there are too many variables to consider and it would not have the built-in logic to process each customer's claims and needs. In addition, he also stated that machines outperform humans in collecting, storing, transmitting, and routine processing. An example of this would be my employer's website. Customers can simply go online and purchase a product without even talking to a sales or member services representative. The information is then stored in a database so that the customer can always go back, review their order, and access their selected services. A human, no matter how smart, would never be able to keep track of hundreds of thousands of customers, let alone know what they purchased or how much they paid.

    In today's technology-driven economy every company must offer their customers multiple methods of accessibility in order to survive. The more opportunities a company has to create a positive experience for its customers, in my opinion, the more likely the customer will return to that company again. I have noticed this with my personal shopping habits and experiences.

    References: Rayport, J., & Jaworski, B. (2004). Best Face Forward. Harvard Business Review, 82(12), 47-58. Retrieved from Business Source Complete database.

    Read the article

  • Real performance of node.js

    - by uther.lightbringer
    I've got a question concerning node.js performance. There are quite a lot of "benchmarks" and a lot of fuss about the great performance of node.js. But how does it stand in the real world? Not just processing empty requests at high speed. If someone could try to compare this scenario: a Java (or equivalent) server running an application with complex business logic between receiving the request and sending the response. How would node.js deal with that? If there was a need for a lot of JavaScript processing on the server side, is node.js really so fast that it can execute that JavaScript and stand a chance against more heavyweight competitors?

    Read the article

  • Adoption of Exadata - Gartner research note

    - by Javier Puerta
    An independent research note by Gartner acknowledges that Oracle Exadata Database Machine has achieved significant early adoption and acceptance of its database appliance value proposition. Analyst Merv Adrian looks at some of the main issues that IT professionals have solved as they assess or deploy the Oracle Exadata solution, including:
    - OLTP and DSS workload support
    - workload consolidation
    - increasing performance and scalability demands
    - data compression improvements
    Gartner reports clients using Oracle Exadata experienced the following:
    - significant performance improvements
    - substantial amounts of cache memory, which greatly improves processing speed
    - Oracle Advanced Compression providing 2-4X data compression, delivering significant reductions in storage requirements and driving shorter times for backup operations
    - tables compressed with Oracle Advanced Compression automatically recompress as data is added/updated
    - one client specifically reported consolidating more than 400 applications onto the Oracle Exadata platform
    Read the full Gartner note

    Read the article

  • Conscience and unconscience from an AI/Robotics POV

    - by Tim Huffam
    Just pondering the workings of the human mind - from an AI/robotics point of view (either of which I know little about)... If conscience is when you're thinking about it (processing it in realtime), and unconscience is when you're not thinking about it (e.g. it's autonomous behaviour), would it be fair to say then that:
    - conscience is software
    - unconscience is hardware
    Considering that human learning is attributed to the number of neural connections made - and repetition is the key; the more the connections, the better one understands the subject - until it becomes a 'known'. Therefore could this be likened to forming hard connections? E.g. maybe learning would progress from an MCU to FPGAs - therefore offloading realtime processing to the hardware (an FPGA or some such device)?

    Read the article

  • One True Event Loop

    - by CyberShadow
    Simple programs that collect data from only one system need only one event loop. For example, Windows applications have the message loop, POSIX network programs usually have a select/epoll/etc. loop at their core, and pure SDL games use SDL's event loop. But what if you need to collect events from several subsystems? Such as an SDL game which doesn't use SDL_net for networking. I can think of several solutions:
    1. Polling (ugh)
    2. Put each event loop in its own thread, and either:
       2.1. Send messages to the main thread, which collects and processes the events, or
       2.2. Place the event-processing code of each thread in a critical section, so that the threads can wait for events asynchronously but process them synchronously
    3. Choose one subsystem for the main event loop, and pass events from other subsystems via that subsystem as custom messages (for example, the Windows message loop and custom messages, or a socket select() loop and passing events via a loopback connection).
    Option 2.1 is more interesting on platforms where message-passing is a well-developed threading primitive (e.g. in the D programming language), but 2.2 looks like the best option to me.
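
    A sketch of option 3 for the SDL case mentioned above: the networking thread wraps incoming data in an SDL user event and pushes it into SDL's queue, so the main loop stays the single point of dispatch (SDL 2 documents SDL_PushEvent as safe to call from other threads; function and event names below are otherwise illustrative).

        #include <SDL.h>
        #include <string.h>

        enum { NET_PACKET_EVENT = 0 };   /* illustrative user-event sub-code */

        /* Called from the networking thread. */
        static void post_network_event(void *packet)
        {
            SDL_Event ev;
            memset(&ev, 0, sizeof(ev));
            ev.type = SDL_USEREVENT;
            ev.user.code  = NET_PACKET_EVENT;
            ev.user.data1 = packet;              /* hand the payload to the main loop */
            SDL_PushEvent(&ev);
        }

        /* The one true loop: SDL events and network events arrive in one place. */
        static void run_main_loop(void)
        {
            SDL_Event ev;
            int running = 1;
            while (running && SDL_WaitEvent(&ev)) {
                switch (ev.type) {
                case SDL_QUIT:
                    running = 0;
                    break;
                case SDL_USEREVENT:
                    if (ev.user.code == NET_PACKET_EVENT) {
                        /* handle_packet(ev.user.data1); */
                    }
                    break;
                default:
                    break;
                }
            }
        }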

    Read the article

  • Will Google Analytics track URLs that just redirect?

    - by Derick Bailey
    I have a link on my site. That link goes to another URL on my site. The code on the server sees that resource being requested and redirects the browser to another website. Will Google Analytics be able to know that the user requested the URL from my server and was redirected? Specifically, I set up a /buy link on my watchmecode.net site to try and track who is clicking the "Buy & Download" button. This link/button hits my server, and my server immediately does a redirect to the PayPal processing so the user can buy the screencast. Is Google Analytics going to know that the user hit the /buy URL on my site, and track that for me? If not, what can I do to make that happen?

    Read the article

  • How do I get a Canon LBP5000 Printer working?

    - by Saigun
    I have unsuccessfully attempted to install a Canon LBP5000 printer on Ubuntu 11.10. I have attempted all possible methods to be found on the web, but nothing seems to work. My latest attempt was Radu Cotescu's script from http://radu.cotescu.com/how-to-install-canon-lbp-printers-in-ubuntu/ Using the script everything appears to work as described during the installation process, but when attempting to actually print, it remains stuck in “processing” (regardless of what I attempt to print) [There is no additional error message]. Could anyone help me? It would be very much appreciated!

    Read the article

  • Is there a way to add Google Docs-like comments to any web page?

    - by Sean
    You know the comments on Google Docs word processing documents? And how it creates a little discussion over in the right-hand margin? I love it. Great for collaboration. I want to free it from Google Docs so I can use it with clients to discuss mock-ups or scaffolded websites. Searching Google for "add comments [or discussions] to any website" only gets you results for adding blog-like comments (Disqus, JS-Kit, etc.) Anyone know of a solution for what I'm after here?

    Read the article

  • Outside Operations in JD Edwards EnterpriseOne Manufacturing

    - by Amit Katariya
    Upcoming E1 Manufacturing webcasts
    Date: March 30, 2010
    Time: 10:00 am MDT
    Product Family: JD Edwards EnterpriseOne Manufacturing
    Summary: This one-hour session is recommended for functional users who would like to understand the Outside Operations process overview, including Setup, Execution and Troubleshooting. Topics will include:
    - Concept
    - Setup in the context of PDM, SFC, Product Costing, and Manufacturing Accounting
    - Processing
    - Troubleshooting
    A short, live demonstration (only if applicable) and question and answer period will be included. Register for this session. Oracle Advisor is dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Important links related to Webcasts: Advisor Webcast Current Schedule, Advisor Webcast Archived Recordings. The above links require valid access to My Oracle Support.

    Read the article

  • Removing mdadm array and converting to regular disks while preserving data

    - by Jeffrey Kevin Pry
    I have a 6-disk (2 TB each) mdadm RAID 5 volume created in Ubuntu 12.04 Server. However, I'm moving to a different solution and want to "unraid" my disks but keep the data. Only 50% of the space is in use. From what I can surmise, I basically have to do this recursively for each physical disk:
    1. Fail the disk
    2. Format the failed disk
    3. Move a portion of the files to the new disk
    4. Reshape the array
    5. Shrink the logical volume md0
    This seems like a very time-consuming process. Is there an easier way to do this (automatically, perhaps) without buying new disks to temporarily hold the data? I am also aware that during this process my RAID volume will be degraded and vulnerable the entire time. I am not too concerned about this and will be using battery backup and moving the most important files off first. Thank you for your help!

    Read the article

  • OpenGL behaviour depending on the graphics card?

    - by Dan
    This is something that has never happened to me before. I have an OpenGL program that uses GLSL shaders to texture a 3D model. The code involves a lot of GPU texture processing, blending, etc. I wanted to check how the performance of my code improves using a faster graphics card (both the new and old cards are NVIDIA, always using the NVIDIA development drivers). But now I have found that once I run the code using the new graphics card, it behaves completely differently (the final render looks wrong), probably because some blending effect is not performed correctly. I haven't really looked into what has changed, but I am guessing that some OpenGL states are, by default, set differently. Is this possible? Have you ever found different OpenGL/GLSL behaviour using different graphics cards? Any "fast" solution? (So far I've thought of plugging the old one back in, pushing all OpenGL default states, and comparing with the ones I initially get using the new card.)
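
    One pragmatic check before plugging the old card back in: explicitly set every piece of state the shader pass depends on instead of relying on driver defaults, since defaults and GLSL compiler strictness are allowed to differ between vendors and driver versions. A minimal sketch of pinning the blend-related state (the exact values are examples, not a prescription):

        #include <GL/gl.h>

        /* Pin the state the render pass depends on rather than trusting defaults. */
        static void set_render_state(void)
        {
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glDisable(GL_DEPTH_TEST);               /* or enable it and set glDepthFunc() */
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  /* texture upload row alignment */
        }

    Checking glGetError() after setup and reading the shader compile/link info logs is also worthwhile, since the newer card's GLSL compiler may only warn where the old one silently accepted the code.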

    Read the article
