Search Results

Search found 7077 results on 284 pages for 'concurrent processing'.


  • Identify the thread consuming high CPU in a Java app

    - by Vincent Ma
    The following Java code emulates a busy thread and an idle thread, and starts both:

        import java.util.concurrent.*;

        public class ThreadTest {
            public static void main(String[] args) {
                new Thread(new Idle(), "Idle").start();
                new Thread(new Busy(), "Busy").start();
            }
        }

        class Idle implements Runnable {
            @Override
            public void run() {
                try {
                    TimeUnit.HOURS.sleep(1);
                } catch (InterruptedException e) {
                }
            }
        }

        class Busy implements Runnable {
            @Override
            public void run() {
                while (true) {
                    "Test".matches("T.*");
                }
            }
        }

    Using Process Explorer, find the busy Java process and the ID of the thread costing the most CPU; see the following screenshot. Converting the thread ID 4044 to hexadecimal gives 0xfcc. Then use VisualVM to take a thread dump and locate that thread by its hex ID; see the following screenshot. On Linux you can use top -H to get per-thread information. That's it! Any questions, let me know. Thanks

    Read the article

  • Is this safe? <a href=http://javascript:...>

    - by KajMagnus
    I wonder if href and src attributes on <a> and <img> tags are always safe with respect to XSS attacks if they start with http:// or https://. For example, is it possible to append javascript: ... to such an href or src attribute in some manner, so that it executes code? (Disregarding whether the destination page is, e.g., a phishing site, or whether the <img src=...> triggers a terribly troublesome HTTP GET request.) Background: I'm processing text with Markdown and then sanitizing the resulting HTML (using Google Caja's JsHtmlSanitizer). Some sample code in Google Caja assumes all hrefs and srcs that start with http:// or https:// are safe; I wonder if it's safe to use that sample code. Kind regards, Kaj-Magnus
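    For comparison, a minimal sketch of the stricter alternative, parsing the URL and whitelisting its scheme rather than trusting a string prefix. This is an illustration, not Caja's actual logic; the class and method names are invented for the example:

        import java.net.URI;
        import java.net.URISyntaxException;

        public class UrlSchemeCheck {
            // True only if the URL parses and its scheme is exactly http or https.
            static boolean isSafeHref(String url) {
                try {
                    String scheme = new URI(url).getScheme();
                    return "http".equalsIgnoreCase(scheme) || "https".equalsIgnoreCase(scheme);
                } catch (URISyntaxException e) {
                    return false; // reject anything that does not parse
                }
            }

            public static void main(String[] args) {
                System.out.println(isSafeHref("https://example.com/page")); // true
                System.out.println(isSafeHref("javascript:alert(1)"));      // false
                System.out.println(isSafeHref("http://x/javascript:y"));    // true: colon is in the path, not the scheme
            }
        }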

    Read the article

  • Is there really anything to gain with complex design? [duplicate]

    - by SB2055
    This question already has an answer here: What is enterprise software, exactly? (8 answers) I've been working for a consulting firm for some time, with clients of various sizes, and I've seen web applications ranging in complexity from really simple (MVC, service layer, EF, DB) to really complex (MVC, UoW, DI/IoC, repository, service, UI tests, unit tests, integration tests). But on both ends of the spectrum, the quality requirements are about the same. In simple projects, new devs/consultants can hop on, make changes, and contribute immediately, without having to wade through six layers of abstraction to understand what's going on, or risking misunderstanding some complex abstraction and paying for it down the line. In all cases, there was never a need to actually make code swappable or reusable, and the tests were never actually maintained past the first iteration, because requirements changed, it was too time-consuming, deadlines, business pressure, etc. So if, in the end, testing and interfaces aren't used, rapid development (read: cost savings) is a priority, and the project's requirements will be changing a lot while in development, would it be wrong to recommend a super-simple architecture, even to solve a complex problem, for an enterprise client? Is it complexity that defines enterprise solutions, or is it reliability, the number of concurrent users, ease of maintenance, or all of the above? I know this is a very vague question, and any answer wouldn't apply to all cases, but I'm interested in hearing from devs/consultants that have been in the business for a while and have worked with these varying degrees of complexity, to hear whether the cool-but-expensive abstractions are worth the overall cost, at least while the project is in development.

    Read the article

  • Looking for an old classic book about Unix command-line tools

    - by Little Bobby Tables
    I am looking for a book about the Unix command-line toolkit (sh, grep, sed, awk, cut, etc.) that I read some time ago. It was an excellent book, but I totally forgot its name. The great thing about this specific book was its running example: it showed how to implement a university bookkeeping system using only text-processing tools. You would find a student by name with grep, update grades with sed, calculate average grades with awk, attach grades to IDs with cut, and so on. If my memory serves, this book had a black cover and was published circa 1980. Does anyone remember this book? I would appreciate any help in finding it.

    Read the article

  • EBS ATG Advisor Webcasts - FREE!

    - by cwarticki
    For June 2012 we have scheduled two webcasts: an E-Business Suite OAM Overview and Usage session and an E-Business Suite Workflow Analyzer session. We are running two sessions of each for better global alignment.

    E-Business Suite - OAM Overview and Monitoring
    Agenda: Oracle Applications Manager (OAM) overview, log files, diagnostics and logging, concurrent processing through OAM, Applications Dashboard, troubleshooting, Patch Management/Patch Wizard, OAM "How To" documents, questions & answers.
    EMEA Session: July 10, 2012 at 09:00 AM UK / 10:00 AM CET / 13:30 India / 17:00 Japan / 18:00 Australia. Details & Registration: Note 1466056.1. Direct link to register in WebEx.
    US Session: July 11, 2012 at 18:00 UK / 19:00 CET / 10:00 AM Pacific / 11:00 AM Mountain / 01:00 PM Eastern. Details & Registration: Note 1466057.1. Direct link to register in WebEx.

    E-Business Suite - Workflow Analyzer - Follow-Up
    Agenda: overview of Workflow Analyzer, enhancements implemented in the latest release, questions & answers.
    EMEA Session: July 24, 2012 at 09:00 AM UK / 10:00 AM CET / 13:30 India / 17:00 Japan / 18:00 Australia. Details & Registration: Note 1466058.1. Direct link to register in WebEx.
    US Session: July 25, 2012 at 18:00 UK / 19:00 CET / 10:00 AM Pacific / 11:00 AM Mountain / 01:00 PM Eastern. Details & Registration: Note 1466059.1. Direct link to register in WebEx.

    Schedules, recordings, and presentations of the Advisor Webcasts run under the EBS Applications Technology area can be found in Note 1186338.1. Current schedules of Advisor Webcasts for all Oracle products can be found in Note 740966.1, and post-presentation recordings in Note 740964.1. If you have any question about the schedules, or a suggestion for an Advisor Webcast to be planned in the future, please send an e-mail to Ruediger Ziegler.

    Read the article

  • Webcast Replay: Extreme Performance for Consolidated Workloads with Oracle Exadata

    - by kimberly.billings
    If you missed our live webcast "Extreme Performance for Consolidated Workloads with Oracle Exadata" last week, the replay is now available. Watch the free on-demand webcast in which Tim Shetler, Vice President of Oracle Database Product Management, and Richard Exley, Consulting Member of Technical Staff, discuss how Oracle Exadata can help you significantly improve application performance and reduce infrastructure costs by consolidating transaction processing, data warehousing, or mixed workloads on Oracle Exadata. Note: (1) Turn off pop-up blockers if the slides do not advance automatically. (2) Slides are available for download.

    Read the article

  • Using normals in DirectX 10

    - by Dave
    I've got a working OBJ loader that loads vertices, indices, texture coordinates, and normals. Right now it doesn't process the texture coordinates or normals, but it stores them in arrays and creates a valid mesh from the vertices and indices. Now I am trying to figure out how I can make the shader use the correct normal from the array for the current vertex, given that I can't call setnormals() on my mesh. If I were to just use an index into my array of normals corresponding to the index into the vertices, how would I retrieve the index the shader is currently processing? BTW: I am trying to write a Blinn-Phong shader technique. Also, when I create the input layout and have added the NORMAL semantic to it, how would I list multiple semantics in that single parameter? Would I just separate them with a space? PS: If you need to see any code, just let me know.

    Read the article

  • Proper method to update and draw from game loop?

    - by Lost_Soul
    Recently I took up the challenge of creating a basic 2D side-scrolling monster truck game for my little brother, which seems easy enough in theory. After working with XNA, it feels strange jumping into Java (which is what I plan to program it in). Inside my game class I created a private class called GameLoop that implements Runnable; in the overridden run() method I made a while loop that handles time, and I implemented a target FPS for drawing as well. The loop looks like this:

        @Override
        public void run() {
            long fpsTime = 0;
            gameStart = System.currentTimeMillis();
            lastTime = System.currentTimeMillis();
            while (game.isGameRunning()) {
                currentTime = System.currentTimeMillis();
                long elapsedTime = currentTime - lastTime;
                if (mouseState.leftIsDown) {
                    que.add(new Dot(mouseState.getPosition()));
                }
                entities.addAll(que);
                game.updateGame(elapsedTime);
                fpsTime += elapsedTime;
                if (fpsTime >= (1000 / targetedFPS)) {
                    game.drawGame(elapsedTime);
                    fpsTime = 0; // reset the accumulator so drawing stays near the target rate
                }
                lastTime = currentTime;
            }
        }

    The problem I've run into is the adding of entities after a click. I made a class with another private class that implements MouseListener and MouseMotionListener; on changes it sets a few booleans telling me whether the mouse is pressed, which seems to work great, but when I add the entity it sometimes throws a ConcurrentModificationException (CME). I have all the entities stored in a LinkedList, so I later added a que LinkedList whose contents I add to the normal list in the update loop. I think this would work fine if it were just the update method in the game loop, but with the repaint() method (called inside the game.drawGame() method) it throws the CME. The only other thing is that I'm currently drawing directly from the overridden paintComponent() method in a custom class that extends JPanel. Maybe there is a better way to go about this, as well as a fix for my CME? Thanks in advance!!!
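    One common fix, sketched below with the names from the question (Dot, entities, que) plus invented helper names, and offered as an illustration rather than the one right answer: make the hand-off queue a ConcurrentLinkedQueue so the AWT event thread can enqueue safely, and drain it into the entity list only on the game-loop thread, so no list is ever mutated while another thread iterates it.

        import java.util.LinkedList;
        import java.util.List;
        import java.util.Queue;
        import java.util.concurrent.ConcurrentLinkedQueue;

        class Dot { /* stub for the question's Dot entity */ }

        class EntityBuffer {
            // Written by the AWT event thread, drained by the game-loop thread.
            private final Queue<Dot> que = new ConcurrentLinkedQueue<>();
            // Touched only by the game-loop thread after construction.
            private final List<Dot> entities = new LinkedList<>();

            // Called from the MouseListener; never touches the iterated list.
            void spawn(Dot d) {
                que.add(d);
            }

            // Called once per frame from the game loop, before update and draw.
            void drainPending() {
                Dot d;
                while ((d = que.poll()) != null) {
                    entities.add(d);
                }
            }
        }

    If repaint() still iterates entities from the Swing thread, either draw from a snapshot or switch entities to a CopyOnWriteArrayList, which trades some write cost for safe concurrent iteration.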

    Read the article

  • How much overhead is there in persistent connections?

    - by nynex
    OK, so I'm musing over a little side project I want to start. Essentially it's a multi-session web-based FTP client: multi-session in that you can log into several FTP servers at the same time and perform operations like moving a file from one FTP server to another. I'm doing this mainly to brush up on the new web-dev technologies, particularly WebSockets. I'm using Node.js + Socket.IO to keep a persistent bi-directional connection between the web browser and the web server. The web server will also have persistent connections to each FTP server the user has logged into. So if there are 100 concurrent users, each logged into 5 FTP accounts, the web server will have 100 WebSocket connections + 500 FTP connections. Is servicing 600 connections a lot? I know it depends on the hardware resources of the server, but is something like this doable on a budget? Are there more efficient means of doing something like this? I know it's unlikely that this project will really get popular, but I want it to scale well regardless. Thanks for any help; I've still got a lot to learn.

    Read the article

  • WP: Oracle Multitenant on SuperCluster T5-8: Study of Database Consolidation Efficiency

    - by uwes
    Consolidation in the data center is the driving factor in reducing capital and operational expense in IT today. This is particularly relevant as customers invest more in cloud infrastructure and associated service delivery. Database consolidation is a strategic component in this effort. Oracle Database 12c introduces Oracle Multitenant, a new database consolidation model in which multiple Pluggable Databases (PDBs) are consolidated within a Container Database (CDB). While keeping many of the isolation aspects of single databases, it allows PDBs to share the system global area (SGA) and background processes of a common CDB. The white paper recently published on OTN, Oracle Multitenant on SuperCluster T5-8: Study of Database Consolidation Efficiency, analyzes and quantifies savings in compute resources, efficiencies in transaction processing, and consolidation density of Oracle Multitenant compared to consolidated single-instance databases (SIDBs) running in a bare-metal environment.

    Read the article

  • DRY, string, and unit testing

    - by Rodrigue
    I have a recurring question when writing unit tests for code that involves constant string values. Take the example of a method/function that does some processing and returns a string built from a pre-defined constant. In Python, that would be something like:

        STRING_TEMPLATE = "/some/constant/string/with/%s/that/needs/interpolation/"

        def process(some_param):
            # We do some meaningful work that gives us a value
            result = _some_meaningful_action(some_param)
            return STRING_TEMPLATE % result

    If I want to unit test process, one of my tests will check the return value. This is where I wonder what the best solution is. In my unit test, I can either apply DRY and use the already-defined constant, or repeat myself and rewrite the entire string:

        def test_process_should_return_correct_url():
            string_result = process("1234")
            # Applying DRY and using the already defined constant
            assert STRING_TEMPLATE % "1234" == string_result
            # Repeating myself, repeating myself
            assert "/some/constant/string/with/1234/that/needs/interpolation/" == string_result

    The advantage I see in the latter is that my test will break if I put the wrong string value in my constant. The inconvenience is that I may be rewriting the same string over and over again across different unit tests.

    Read the article

  • When designing a job queue, what should determine the scope of a job?

    - by Stuart Pegg
    We've got a job queue system that'll cheerfully process any kind of job given to it. We intend to use it to process jobs that each contain two tasks:

    Job (pass information from one server to another)
    - Fetch task (get the data, slowly)
    - Send task (send the data, comparatively quickly)

    The difficulty we're having is that we don't know whether to break the tasks into separate jobs or process each job in one go. Are there any best practices or useful references on this subject? Is there some obvious benefit to a method that we're missing? So far we can see these benefits for each method (a rough sketch of the split approach follows below):

    Split
    - Job lease length reflects job length, rather than the total of the two tasks
    - Finer granularity on recovery: if we lose outgoing connectivity, we can tell all the send jobs to retry
    - The starting state of the second task is saved to job history, which helps with debugging (although similar logging could be added to the single-job method)

    Single
    - A single job to be scheduled: less processing overhead
    - Data is not stale on recovery: if the outgoing downtime is quite long, pending send jobs could be outdated
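    A minimal sketch of the split approach, where the fetch job enqueues the send job on completion so each lease covers only its own task. All of the names here (JobQueue, FetchJob, SendJob) are illustrative, not from any particular framework:

        import java.util.ArrayDeque;
        import java.util.Queue;

        interface Job {
            void run(JobQueue queue) throws Exception;
        }

        // The slow half: on success it records its result as a new, separately leased job.
        class FetchJob implements Job {
            public void run(JobQueue queue) throws Exception {
                byte[] data = fetchFromSource();    // slow
                queue.enqueue(new SendJob(data));   // recovery point: fetched data is captured
            }
            private byte[] fetchFromSource() { return new byte[0]; } // stub
        }

        // The fast half: if sending fails, only this short task is retried.
        class SendJob implements Job {
            private final byte[] data;
            SendJob(byte[] data) { this.data = data; }
            public void run(JobQueue queue) throws Exception {
                sendToDestination(data);
            }
            private void sendToDestination(byte[] data) {} // stub
        }

        class JobQueue {
            private final Queue<Job> jobs = new ArrayDeque<>();
            synchronized void enqueue(Job j) { jobs.offer(j); }
            synchronized Job take() { return jobs.poll(); }
        }

    The staleness trade-off in the question remains: the sketch could timestamp the data in SendJob and re-fetch when a recovered job is older than some threshold.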

    Read the article

  • New customer references for Exadata-based projects

    - by Javier Puerta
    Milletech Systems, Inc. shows a large state university how to improve query response times 15-fold using its grant management solution built on Oracle's Extreme Performance Infrastructure. Read more. Ação Informática helped Valdecard realize a 15-fold improvement in fleet-management and benefit-card data processing using Oracle Exadata Database Machine. Read more. Neusoft deployed Benxi Municipal Human Resources and Social Security Bureau's cloud-based database platform to process social insurance payments 50 times faster using Oracle Exadata Database Machine. Read more.

    Read the article

  • BPM Parallel Multi Instance sub processes by Niall Commiskey

    - by JuergenKress
    Here is a very simple scenario: an order with lines is processed. The OrderProcess accepts an order with its attendant lines. The Fulfillment process is called for each order line. We do not have many order lines, and the processing is simple, so we run this in parallel. Let's look at the definition of the multi-instance sub-process: read the full article here. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: BPM, Niall Commiskey, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • How to apply a filter to the screen of a running program?

    - by Shahbaz
    The idea is to take old games, without modifying them, and have the graphics card apply a series of filters to their output before sending it to the monitor. A very crude example would be to take a game that has a resolution of 640x480 and:
    1. Increase the resolution to 1280x960
    2. Apply a blur (low-pass filter)
    3. Apply a sharpen (identity + high-pass filter)
    These steps may not be the best way to improve the visuals of an old game, but there are a lot of techniques that are well known in image processing for this purpose. The question is: do (NVIDIA) graphics cards provide the ability to load a program that modifies the screen before sending it to the monitor? If so, what is this called, and what terminology should I use to search? I would be comfortable doing the programming myself if this ability is part of a library. Also, would the solution be different between Windows and Linux? If so, either is fine, since most of the games are probably runnable under Wine.
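    As a CPU-side illustration only of the blur and sharpen steps above (a real implementation of this idea would run on the GPU, e.g. as a post-processing shader), both are plain 3x3 convolution kernels; a minimal Java sketch using java.awt.image.ConvolveOp:

        import java.awt.image.BufferedImage;
        import java.awt.image.ConvolveOp;
        import java.awt.image.Kernel;

        public class FrameFilters {
            // 3x3 box blur: a simple low-pass filter.
            private static final Kernel BLUR = new Kernel(3, 3, new float[] {
                1/9f, 1/9f, 1/9f,
                1/9f, 1/9f, 1/9f,
                1/9f, 1/9f, 1/9f,
            });

            // Sharpen = identity + high-pass: boost the centre, subtract the neighbours.
            private static final Kernel SHARPEN = new Kernel(3, 3, new float[] {
                 0f, -1f,  0f,
                -1f,  5f, -1f,
                 0f, -1f,  0f,
            });

            static BufferedImage blurThenSharpen(BufferedImage frame) {
                BufferedImage blurred = new ConvolveOp(BLUR, ConvolveOp.EDGE_NO_OP, null).filter(frame, null);
                return new ConvolveOp(SHARPEN, ConvolveOp.EDGE_NO_OP, null).filter(blurred, null);
            }
        }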

    Read the article

  • NPOT texture and video memory usage

    - by Eonil
    I read in this Q&A that an NPOT texture will take as much memory as a texture of the next POT size. That means it gives no benefit over a POT texture with proper management (maybe it's even worse, because NPOT can be slower!). Is this true? Does an NPOT texture take up, and waste, the same memory as a POT texture? I am considering NPOT textures for post-processing, so if they give no memory-space benefit, using them is meaningless to me. Maybe the answer differs per platform; I am targeting mobile devices such as iPhones and Androids. Do NPOT textures take the same amount of memory on mobile GPUs?
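    For a quick back-of-the-envelope check of what that worst case would cost, assuming the question's premise that the driver pads each dimension up to the next power of two (the premise being asked about, not a guarantee on any given GPU), a small Java helper:

        public class PotPadding {
            // Smallest power of two >= n (n must be positive).
            static int nextPot(int n) {
                int highest = Integer.highestOneBit(n);
                return (highest == n) ? n : highest << 1;
            }

            // Bytes used if padded to POT dimensions, at 4 bytes per RGBA8 texel.
            static long paddedBytes(int width, int height) {
                return 4L * nextPot(width) * nextPot(height);
            }

            public static void main(String[] args) {
                // A 640x480 texture padded to 1024x512 under this assumption:
                System.out.println(paddedBytes(640, 480)); // 2097152 bytes, vs 1228800 unpadded
            }
        }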

    Read the article

  • Is content slowing down your business?

    - by Lance Shaw
    We are living in a digital world; however, paper is everywhere and expensive, right? We all agree that content is an important part of our organization and contributes to its decision making. But many of us see dealing with it as a challenge, and the growth of content is impacting our ability to scale and respond quickly to our customers. Business has always been content-intensive, and for JD Edwards customers this is an important consideration. After all, the processes being run in JD Edwards are usually critical to the success of your business, and if they are not running as smoothly as they should, due to manual process steps involving paper or searching for content, you should look into improving them. To that end, we hope you will join this webinar and learn how Oracle and KPIT | SYSTIME have partnered to help a JD Edwards customer content-enable its enterprise with Oracle WebCenter Content and Oracle WebCenter Imaging 11g and integrate them back with JD Edwards to significantly improve processing speed and operational costs.

    Read the article

  • Configuring MySQL Cluster Data Nodes

    - by Mat Keep
    In my previous blog post, I discussed the enhanced performance and scalability delivered by extensions to the multi-threaded data nodes in MySQL Cluster 7.2. In this post, I'll share best practices on the configuration of data nodes to achieve optimum performance on the latest generations of multi-core, multi-thread CPU designs.

    Configuring the Data Nodes
    The configuration of data node threads can be managed in two ways via the config.ini file (a sample sketch follows this entry):
    - Simply set MaxNoOfExecutionThreads to the appropriate number of threads to be run in the data node, based on the number of threads presented by the processors used in the host or VM.
    - Use the new ThreadConfig variable, which enables users to configure both the number of each thread type to use and which CPUs to bind them to.

    The flexible configuration afforded by the multi-threaded data node enhancements means that it is possible to optimise data nodes to use anything from a single CPU/thread up to a 48-CPU/thread server. Co-locating the MySQL Server with a single data node can fully utilize servers with 64-80 CPU/threads. It is also possible to co-locate multiple data nodes per server, but this is now only required for very large servers with 4+ sockets of dense multi-core processors.

    24 Threads and Beyond!
    An example of how to make best use of a 24-CPU/thread server is to configure the following:
    - 8 ldm threads
    - 4 tc threads
    - 3 recv threads
    - 3 send threads
    - 1 rep thread for asynchronous replication

    Each of those threads should be bound to a CPU. It is possible to bind the main thread (schema management domain) and the IO threads to the same CPU in most installations. In the configuration above, we have bound threads to 20 different CPUs. We should also protect these 20 CPUs from interrupts by using the IRQBALANCE_BANNED_CPUS configuration variable in /etc/sysconfig/irqbalance, setting it to 0x0FFFFF. The reason for doing this is that MySQL Cluster generates a lot of interrupt and OS kernel processing, so it is recommended to separate that activity onto other CPUs to ensure conflicts with the MySQL Cluster threads are eliminated. When booting the Linux kernel it is also possible to provide the option isolcpus=0-19 in grub.conf. The result is that the Linux scheduler won't use these CPUs for any task; only by using CPU-affinity syscalls can a process be made to run on them. By using this approach, together with binding MySQL Cluster threads to specific CPUs and banning those CPUs from IRQ processing, a very stable performance environment is created for a MySQL Cluster data node.

    On a 32-CPU/thread server:
    - Increase the number of ldm threads to 12
    - Increase tc threads to 6
    - Provide 2 more CPUs for the OS and interrupts
    - The number of send and receive threads should, in most cases, still be sufficient

    On a 40-CPU/thread server, increase ldm threads to 16 and tc threads to 8, and increase the send and receive threads to 4 each. On a 48-CPU/thread server it is possible to optimize further by using:
    - 12 tc threads
    - 2 more CPUs for the OS and interrupts
    - The IO threads and main thread on separate CPUs
    - 1 more receive thread

    Summary
    As both this and the previous post seek to demonstrate, the multi-threaded data node extensions not only serve to increase the performance of MySQL Cluster, they also enable users to achieve significantly improved levels of utilization from current and future generations of massively multi-core, multi-thread processor designs. A big thanks to Mikael Ronstrom, Senior MySQL Architect at Oracle, for his work in developing these enhancements and best practices. You can download MySQL Cluster 7.2 today and try out all of these enhancements. The Getting Started guides are an invaluable aid to quickly building a proof of concept. Don't forget to check out the MySQL Cluster 7.2 New Features whitepaper to discover everything that is new in the latest GA release.
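    A rough sketch of the ThreadConfig style described above, using the 24-CPU layout from the post. The CPU numbers are illustrative, and the exact option syntax should be checked against the MySQL Cluster documentation for the version in use:

        [ndbd default]
        # Bind each thread type explicitly instead of using MaxNoOfExecutionThreads.
        # 8 ldm + 4 tc + 3 recv + 3 send + 1 rep, with main and io sharing CPU 19.
        ThreadConfig=ldm={count=8,cpubind=0-7},tc={count=4,cpubind=8-11},recv={count=3,cpubind=12-14},send={count=3,cpubind=15-17},rep={count=1,cpubind=18},main={count=1,cpubind=19},io={count=1,cpubind=19}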

    Read the article

  • it started with a broken package

    - by Inteloidl.com
    It all started when I tried to update to the latest LTS, but the update was interrupted and some broken packages appeared. I used Synaptic to repair, but it won't repair; I've tried some console commands from a forum topic... and in the end, when I try to open Synaptic or Update Manager, this is what appears. Any ideas?

    E: Dynamic MMap ran out of room. Please increase the size of APT::Cache-Limit. Current value: 25165824. (man 5 apt.conf)
    E: Error occurred while processing mantis (NewVersion1)
    E: Problem with MergeList /var/lib/apt/lists/c.archive.ubuntu.com_ubuntu_dists_lucid_universe_binary-i386_Packages
    E: The package lists or status file could not be parsed or opened.
    E: _cache->open() failed, please report.
    W: Unable to munmap
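    For what it's worth, the first error itself names its remedy: raising APT::Cache-Limit, as man 5 apt.conf describes. One way is a drop-in file (the filename and value here are arbitrary example choices; the value just needs to exceed the reported 25165824):

        # /etc/apt/apt.conf.d/00cache-limit
        APT::Cache-Limit "100000000";

    The MergeList error is commonly cleared by removing the cached lists (the files under /var/lib/apt/lists/) and running apt-get update to rebuild them.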

    Read the article

  • Broken package ubuntuone-control-panel-qt

    - by baale7
    An interrupted installation of updates resulted in ubuntuone-control-panel-qt breaking, and now I cannot install or update anything. There seems to be an error in python2.7:

    ImportError: No module named _sysconfigdata_nd
    dpkg: error processing archive ubuntuone-control-panel-qt_13.09-0ubuntu1_all.deb (--install)

    When I try to reinstall the package via the .deb, dpkg --audit yields:

    The following packages are in a mess due to serious problems during installation. They must be reinstalled for them (and any packages that depend on them) to function properly:
    - python-pexpect (Python module for automating interactive applications)
    - ubuntuone-control-panel-qt (Ubuntu One Control Panel - Qt frontend)
    - hplip-data (HP Linux Printing and Imaging - data files)
    - python-libxml2 (Python bindings for the GNOME XML library)
    - apport (automatically generate crash reports for debugging)

    The following packages are only half configured, probably due to problems configuring them the first time. The configuration should be retried using dpkg --configure or the configure menu option in dselect:
    - python-ubuntuone-client (Ubuntu One client Python libraries)
    - python-ubuntuone-control-panel (Ubuntu One Control Panel - Python Libraries)

    dpkg --configure and dpkg --configure -a don't work. Any help?

    Read the article

  • Coherence Query Performance in Large Clusters

    - by jpurdy
    Large clusters (measured in terms of the number of storage-enabled members participating in the largest cache services) may introduce challenges when issuing queries. There is no particular cluster-size threshold for this; rather, there is a gradually increasing tendency for issues to arise. The most obvious challenges are that a client's perceived query latency will be determined by the slowest responder (more likely to be a factor in larger clusters), and that adding additional cache servers will not increase query throughput if the query processing is not compute-bound (which would generally be the case for most indexed queries). If the data set can take advantage of the partition affinity features of Coherence, then the application can use a PartitionedFilter to target a query at a single server (using partition affinity to ensure that all of the relevant data is in a single partition). If this cannot be done, then avoiding an excessive number of cache server JVMs will help, as will ensuring that each cache server has sufficient CPU resources available and is properly configured to minimize GC pauses (the most common cause of a slow-responding cache server).
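    A minimal sketch of the PartitionedFilter approach, assuming standard Coherence APIs; the cache name, the filter, the partition count, and the way the target partition is chosen here are all illustrative:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.partition.PartitionSet;
        import com.tangosol.util.filter.EqualsFilter;
        import com.tangosol.util.filter.PartitionedFilter;

        public class TargetedQuery {
            public static void main(String[] args) {
                NamedCache orders = CacheFactory.getCache("orders");

                // With partition affinity, all entries for one customer live in one
                // partition, so the query touches one member instead of all of them.
                int partitionCount = 257;  // must match the cache service's configured count
                int targetPartition = 42;  // derived from the affinity key in a real app

                PartitionSet parts = new PartitionSet(partitionCount);
                parts.add(targetPartition);

                EqualsFilter byCustomer = new EqualsFilter("getCustomerId", "C-1001");
                System.out.println(orders.entrySet(new PartitionedFilter(byCustomer, parts)));
            }
        }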

    Read the article

  • Introduction to JBatch

    - by reza_rahman
    It seems batch processing is moving more and more into the realm of the Java developer. In recognition of this fact, JBatch (aka Java Batch, JSR 352, Batch Applications for the Java Platform) was added to Java EE 7. In a recent article, JBatch specification lead Chris Vignola of IBM provides a high-level overview of the API. He discusses the core concepts and motivation, the Job Specification Language, the reader-processor-writer pattern, the job operator, the job repository, chunking, packaging, partitions, splits/flows, and the like. You can also check out the official specification yourself, or try things out with the newly released Java EE 7 SDK.
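    To make the reader-processor-writer pattern concrete, here is a minimal sketch of the processor step using the standard JSR 352 API; the class name and the string transformation are invented for the example:

        import javax.batch.api.chunk.ItemProcessor;
        import javax.inject.Named;

        @Named
        public class UpperCaseProcessor implements ItemProcessor {
            // Called once per item the reader produces; returning null filters the item out.
            @Override
            public Object processItem(Object item) throws Exception {
                String line = (String) item;  // assumes the reader emits strings
                return line.isEmpty() ? null : line.toUpperCase();
            }
        }

    A job that wires a reader, this processor, and a writer into a chunk step is defined in XML under META-INF/batch-jobs/ and can be started with BatchRuntime.getJobOperator().start("job-name", new Properties()).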

    Read the article

  • The ETL from Hell - Diagnosing Batch System Performance Issues

    Too often, the batch systems that underlie a lot of database processing just grow without conscious design. When runs start to extend beyond their allotted time, and tuning no longer solves the problem, it is often discovered that batches are run in series, with draconian error handling. It is time to impose some rational design, and Nigel is a seasoned healer of batch processes.

    Read the article

  • Printing Problem on Shared Printer

    - by paulus_almighty
    I'm using Ubuntu 11.10 on two machines. One has a USB printer connected, and I want to share this printer with the other over the wireless network. Under Printing > Server > Settings, I have "Publish shared printers connected to this system" and "Allow printing from the internet" enabled; all other checkboxes are disabled. When I right-click on the printer icon, I can see that Enabled and Shared are ticked. On my other machine, I can now click Add Network Printer; it finds the machine name of the server and correctly identifies the printer. However, printing a test page fails. Under printer properties, the printer state says: "Processing - Unable to connect to printer; will retry in 30 seconds..." What's gone wrong?

    Read the article

  • Lightweight Ubuntu

    - by Nick Berardi
    Just to start off, I know of Lubuntu, but it really doesn't meet what I am looking for. Basically, what I am looking for is the standard Ubuntu desktop install, but without all the word processing, multimedia, and games installed. I have seen posts about how to get the desktop environment running on Ubuntu Server, but they seem complicated and never seem to equal the standard desktop install. So my question is: is there any way to tell the standard desktop install not to install all the applications? Or is there a distro available that leaves all the applications out and just has the standard desktop look and feel? What I really want this for is development purposes: to run it in a VM to do Mono development.

    Read the article
