Search Results

Search found 24253 results on 971 pages for 'multiple monitor'.


  • Orchestrating the Virtual Enterprise, Part I

    - by Kathryn Perry
    A guest post by Jon Chorley, Oracle's Chief Sustainability Officer & Vice President, SCM Product Strategy

    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement, and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise. What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise.

    Supply Chain Management Systems in a Virtual Enterprise
    Historically, enterprise software was constructed to automate the ERP, and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP-agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources. Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems.
In my next post, I will share examples of companies that have made that shift and talk more about the distributed orchestration process.

    Read the article

  • Protecting Consolidated Data on Engineered Systems

    - by Steve Enevold
    In this time of reduced budgets and cost-cutting measures in Federal, State and Local governments, the requirement to provide services continues to grow. Many agencies are looking at consolidating their infrastructure to reduce cost and meet budget goals. Oracle's engineered systems are ideal platforms for accomplishing these goals. These systems provide unparalleled performance that is ideal for running applications and databases that traditionally run on separate dedicated environments. However, putting multiple critical applications and databases in a single architecture makes security more critical. You are putting a concentrated set of sensitive data on a single system, making it a more tempting target. The environments were previously separated by iron, so now you need to provide assurance that one group, department, or application's information is not visible to other personnel or applications resident in the Exadata system. Administration of the environments requires formal separation of duties so an administrator of one application environment cannot view or negatively impact others. Also, these systems need to be in protected environments just like other critical production servers. They should be in a data center protected by physical controls, network firewalls, intrusion detection and prevention, etc.

    Exadata also provides unique security benefits, including a reduced attack surface achieved by minimizing packages and services to only those required. In addition to reducing the possible system areas someone may attempt to infiltrate, Exadata has the following features:
    1. InfiniBand, which functions as a secure private backplane
    2. IPTables, to perform stateful packet inspection for all nodes (Cellwall implements firewall services on each cell using IPTables)
    3. Hardware-accelerated encryption for data at rest on storage cells

    Oracle is uniquely positioned to provide the security necessary for implementing Exadata because security has been a core focus since the company's beginning. In addition to the security capabilities inherent in Exadata, Oracle security products are all certified to run in an Exadata environment.

    Database Vault
    Oracle Database Vault helps organizations increase the security of existing applications and address regulatory mandates that call for separation of duties, least privilege, and other preventive controls to ensure data integrity and data privacy. Oracle Database Vault proactively protects application data stored in the Oracle database from being accessed by privileged database users. A unique feature of Database Vault is the ability to segregate administrative tasks: it can control when a command can be executed, and it lets the DBA manage the health of the database and its objects without being able to see the data.

    Advanced Security
    Oracle Advanced Security helps organizations comply with privacy and regulatory mandates by transparently encrypting all application data or specific sensitive columns, such as credit cards, social security numbers, or personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides the most cost-effective solution for comprehensive data protection.

    Label Security
    Oracle Label Security is a powerful and easy-to-use tool for classifying data and mediating access to data based on its classification. Designed to meet public-sector requirements for multi-level security and mandatory access control, it provides a flexible framework that both government and commercial entities worldwide can use to manage access to data on a "need to know" basis in order to protect data privacy and achieve regulatory compliance.

    Data Masking
    Data Masking reduces the threat of someone in the development organization taking data that has been copied from production to the development environment for testing, upgrades, etc., by irreversibly replacing the original sensitive data with fictitious data so that production data can be shared safely with IT developers or offshore business partners.

    Audit Vault and Database Firewall
    Oracle Audit Vault and Database Firewall serves as a critical detective and preventive control across multiple operating systems and database platforms to protect against the abuse of legitimate access to databases - the cause of almost all data breaches and cyber attacks.

    Consolidation, cost savings, and performance can now be achieved without sacrificing security. The combination of built-in protection and Oracle's industry-leading data protection solutions makes Exadata an ideal platform for Federal, State, and Local governments and agencies.

    Read the article

  • Unity Greeter login screen cuts off login options

    - by ammianus
    I have a pretty newly installed Ubuntu 12.04, using Unity. My external monitor's maximum resolution is 1920x1080. In the Unity desktop itself everything looks great. I have an NVidia graphics card. When I start my computer and get to the Unity greeter login screen, the display is oddly formatted and the resolution seems off. It looks like a zoomed-in view of the larger 1920x1080 screen, and as such it crops the login options off at the left-hand side of the screen, so I can only just see the edge of the password box for the user I want to log in with. I can log in to one account by blindly typing the password, but I am unable to switch to other accounts. Is there anything I can do to fix the login screen display so that I can see the normal login options? Note: I first noticed it when I changed my desktop background; the next time I logged in I saw the issue.
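    One hedged approach (untested here, and the output name is an assumption - check yours with xrandr -q): LightDM can run a display-setup script as root before the greeter is drawn, which can force the greeter to the monitor's native resolution.

        # /usr/local/bin/greeter-setup.sh  (hypothetical path; make it executable)
        #!/bin/sh
        xrandr --output HDMI-0 --mode 1920x1080   # HDMI-0 is an assumed output name

        # Then point LightDM at it in /etc/lightdm/lightdm.conf:
        #   [SeatDefaults]
        #   display-setup-script=/usr/local/bin/greeter-setup.sh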

    Read the article

  • Removing Barriers to Create Effective Data Models

    After years of creating and maintaining data models, I have started to notice common barriers that decrease the accuracy and usefulness of models. In my opinion, the main causes of these barriers are a lack of knowledge and a lack of communication within a company. The lack of knowledge regarding data models or data modeling can take many forms.

    Company Culture Knowledge
    Whether documented or undocumented, the existing business rules of a company can affect how data is modeled. For example, if a company allows only one assigned person per customer to manipulate that customer's record, then a data model that includes an associative table joining customers and employees would be wrong, because the many-to-many relationship it creates would allow multiple employees to handle a customer.

    Technical Knowledge
    Depending on a data modeler's proficiency in modeling data, they can inadvertently cause issues and/or complications with a design without even noticing. It is important that companies share data modeling responsibilities so that models are developed from multiple perspectives of a system, the company, and the original problem. In addition, the tools that a company selects to create data models can also affect the accuracy of a model if designers are not familiar with the tools or the tools are too complex for the designers to use.

    Existing System Knowledge
    For a data modeler to model data for an existing system so that new changes can be applied, they need to know at least the basic concepts of the system so that they can work within it. This promotes reusability of data and prevents the chance of duplicating data.

    Project Knowledge
    This should be pretty obvious, but it is very hard to create an accurate data model without knowing what data needs to be modeled. I have always found it strange that I have been asked to start modeling data prior to a client formalizing any requirements. Usually when this happens I have to make several iterations on a model, and the client still does not know exactly what they want. In addition, issues can arise when certain stakeholders of a project are not consulted prior to the design, or after the project is over, because it can cause misunderstandings and confusion for the end user, and the project may not solve the problem it was originally intended to solve.

    One common thread between each type of knowledge is that all of these barriers can be avoided through good communication. For example, if a modeler is new to a company, then they should ask longer-serving employees about any business-specific rules, documented or undocumented, that must be applied to projects in general. Furthermore, if a modeler is not really familiar with a specific data modeling tool, then they need to speak up and ask for help from other employees or their manager. This will not only help the modeler in the current project, but also in future projects they do for the company. Additionally, if a project is not clearly defined before a data modeler is assigned to it, then it is their responsibility to communicate with the other stakeholders to clarify any part of the project that is unclear, so that the data model they create accurately aligns with the project.

    Read the article

  • Unity isn't starting on 13.10 (with Cinnamon 2.0 installed)

    - by Sam Pearman
    Since upgrading to 13.10, I can't log in to the Unity desktop. LightDM works correctly, but attempting to log in tries to start the session and then drops back to LightDM. I've already dropped to a terminal (Ctrl+Alt+F2) and done this:

        sudo apt-get update
        sudo apt-get install --reinstall ubuntu-desktop
        sudo apt-get install unity

    Logging in as a guest session also fails. Logging in to other window managers works with varying degrees of success. Note: I have Cinnamon 2.0 installed from a PPA, and I'm using a 2-monitor setup. Also of note: in the session prior to my upgrade to 13.10, the Unity background failed to display at all, instead showing whatever was in the screen buffer from the previous frame. The entire OS worked correctly otherwise, so I just ignored it for the session. No other upgrades or even updates were done prior to this occurring. My upgrade path to 13.10 was basically this: install 13.04 alongside Windows 7, use Ubuntu as a glorified web browser for a while, get updates (in preparation for 13.10), install 13.10. I also used Unity Tweak Tool to change some aspects of Unity, particularly auto-hide. Any help or ideas would be appreciated, as I'm typing this on my phone :(
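    Since Unity Tweak Tool was used before the upgrade, one hedged first step (an assumption about the cause, not a confirmed fix) is resetting the Unity/Compiz settings it wrote to dconf, which are a common source of 13.10 login loops:

        # From the Ctrl+Alt+F2 console, as the affected user; prefix with
        # dbus-launch if dconf complains about a missing session bus.
        dconf reset -f /org/compiz/     # wipe Unity/Compiz settings for this user only
        sudo service lightdm restart    # then try logging in again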

    Read the article

  • How do I make the PolicyKit authentication agent window not disappear when I enter a faulty password in Ubuntu 12.04?

    - by Petar
    As far as I remember, in previous versions of Ubuntu, whenever authentication was required and the PolicyKit authentication agent window was presented, it stayed there even after I entered a faulty password. But now, whenever I make a mistake, the window is closed immediately. I find this behaviour irritating. For instance, I use Synaptic rather frequently, and I prefer to start it using Synapse. I press Ctrl+Space to invoke Synapse, then I enter "syn" (s shows SMPlayer, sy shows System Monitor) and then I press Enter so that Synaptic is invoked. Then I'm presented with the PolicyKit authentication agent window. As my password is rather complicated - using special characters and capital letters - it's easy to make a mistake. If I do make a mistake while typing my password, I'm forced to redo all the previous steps. It's annoying as hell, knowing that this is not the way the PolicyKit authentication agent window behaved before. It used to warn me that the password was not correct and then wait for the correct input. I'm not sure if it allowed trying for the correct password indefinitely, or if it was limited to 3 retries, which is a much saner behaviour than the current one. I'm using Gnome 3, but the same thing happens in Unity too, although the window looks different.

    Read the article

  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains).

    Introduction
    SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify either the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the Logical Domains constraint manager attempts to assign threads from the same cores to a domain and avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments.

    Why split cores can matter
    Split-core allocations can silently reduce performance because multiple domains with different address spaces and memory contents are sharing the core's Level 1 cache (L1$). This is called false cache sharing, since even identical memory addresses from different domains must point to different locations in RAM. The effect of this is increased contention for the cache and higher memory latency for each domain using that core. The degree of performance impact can be widely variable. For applications with very small memory working sets, and with I/O-bound or low-CPU-utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance, then this can have an important impact. This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down.

    What to do about it
    Split-core situations are easily avoided. In most cases the Logical Domains constraint manager will avoid them without any administrative action, but they can be entirely prevented by doing one of several things:
    - Assign virtual CPUs in multiples of 8 - the number of threads per core. For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary.
    - Use the whole-core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole-core allocation is used, an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan.
    - Don't obsess: if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it. If you have low-utilization workloads being consolidated from older machines onto a current T-series, then there's no need to worry about this or to assign an entire core to domains that will never use that much capacity. In any case, make sure the most important domains have their own CPU cores - in particular the control domain and any I/O or service domain, and of course any important guests.

    Summary
    Split-core CPU allocation to domains can potentially have an impact on performance, but the Logical Domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.
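    A minimal sketch of the whole-core commands from this post, with placeholder domain names (the core counts here are illustrative, not recommendations):

        ldm set-core 2 primary      # give the control domain whole cores of its own
        ldm set-core 1 mydomain     # this guest gets one entire core, never a split core
        ldm list                    # verify each domain's VCPU count is a multiple of 8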

    Read the article

  • Oracle Enterprise Manager Extensibility News - June 2014

    - by Joe Diemer
    Introducing Extensibility Exchange Version 2
    On the heels of Enterprise Manager 12c Release 4 this week comes version 2.0 of the Extensibility Exchange. A new theme allows optimal viewing on a number of different computing devices, from large monitor displays to tablets to smartphones. One of the first things you'll notice is a scrollable banner with the latest news related to Enterprise Manager and extensibility. Along with the "slider" and the latest entries from Oracle and the Partner community, new features like a tag cloud and an auto-complete search box provide a better way to find the plug-in, connector or other Enterprise Manager entity you are looking for. Once you find it, a content details page with specific info related to that particular entity will enable you to access it at the provider's site and also rate and comment on that particular item. You can also send an email from the content details page, which is routed to the developer. And if you want to use version 1 of the Extensibility Exchange instead, you will be able to do so via the "Classic" option. Check it out today at http://www.oracle.com/goto/emextensibility.

    Recent Additions from Oracle's Partner Community
    A number of important 3rd-party plug-ins have been contributed by Oracle's partner community, which can be accessed via the Extensibility Exchange or by clicking the links in this blog: Dell Open Manage, Fusion-io ION Accelerator, NetApp SANtricity E-Series, and PostgreSQL by Blue Medora. You can also check out the following best practices and labs available via the Exchange: Riverbed Stingray Traffic Manager Reference Architecture, Datavail Alert Optimizer Custom Templates, Apps Associates' Oracle Enterprise Manager "Test Drives" for Oracle Database 12c Management, Oracle Enterprise Manager Monitoring Essentials, and Oracle Application Management Suite for Oracle E-Business Suite.

    Read the article

  • Logging errors caused by exceptions deep in the application

    - by Kaleb Pederson
    What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error? For example, let's say that I have an ETL system whose transform step involves: a transformer, pipeline, processing algorithm, and processing engine. In brief, the transformer takes in an input file, parses out records, and sends the records through the pipeline. The pipeline aggregates the results of the processing algorithm (which could do serial or parallel processing). The processing algorithm sends each record through one or more processing engines. So, I have at least four levels: Transformer - Pipeline - Algorithm - Engine. My code might then look something like the following:

        class Transformer {
            void Process(InputSource input) {
                try {
                    var inRecords = _parser.Parse(input.Stream);
                    var outRecords = _pipeline.Transform(inRecords);
                } catch (Exception ex) {
                    var inner = new ProcessException(input, ex);
                    _logger.Error("Unable to parse source " + input.Name, inner);
                    throw inner;
                }
            }
        }

        class Pipeline {
            IEnumerable<Result> Transform(IEnumerable<Record> records) {
                // NOTE: no try/catch as I have no useful information to provide
                // at this point in the process
                var results = _algorithm.Process(records);
                // examine and do useful things with results
                return results;
            }
        }

        class Algorithm {
            IEnumerable<Result> Process(IEnumerable<Record> records) {
                var results = new List<Result>();
                foreach (var engine in Engines) {
                    foreach (var record in records) {
                        try {
                            results.Add(engine.Process(record));
                        } catch (Exception ex) {
                            var inner = new EngineProcessingException(engine, record, ex);
                            _logger.Error("Engine {0} unable to parse record {1}", engine, record);
                            throw inner;
                        }
                    }
                }
                return results;
            }
        }

        class Engine {
            Result Process(Record record) {
                for (int i = 0; i < record.SubRecords.Count; ++i) {
                    try {
                        Validate(record.SubRecords[i]);
                    } catch (Exception ex) {
                        var inner = new RecordValidationException(record, i, ex);
                        _logger.Error(
                            "Validation of subrecord {0} failed for record {1}",
                            i, record
                        );
                    }
                }
                // (construction and return of the Result is elided here)
            }
        }

    There are a few important things to notice: A single error at the deepest level causes three log entries (ugly? DOS?). Thrown exceptions contain all important and useful information. Logging only happens when failure to do so would cause loss of useful information at a lower level.

    Thoughts and concerns: I don't like having so many log entries for each error. I don't want to lose important, useful data; the exceptions contain all of it, but the stack trace is typically the only thing displayed besides the message. I can log at different levels (e.g., warning, informational). The higher-level classes should be completely unaware of the structure of the lower-level exceptions (which may change as the different implementations are replaced), and the information available at higher levels should not be passed to the lower levels. So, to restate the main questions: What are best practices for logging deep within an application's source? Is it bad practice to have multiple event log entries for a single error?

    Read the article

  • How to Get AirVideo Features in Android for Free

    - by Zainul Franciscus
    AirVideo makes it possible for iPhone, iPad, or iPod Touch users to stream any video format on their devices. If you're an Android user, then you are in luck, because you can get AirVideo's features for free with VLC-Share. In today's tutorial, we will start off by giving you instructions on how to install VLC-Share, followed by configuring the firewall and port forwarding, and we complete the tutorial with a walkthrough of VLC-Share's features.

    Read the article

  • How to reduce the fan noise and how to increase battery life?

    - by mehdi
    I have a brand new Sony Vaio S series laptop (VPCSA2DGX). It came factory-installed with Windows 7 Professional Edition 64-bit and runs an Intel Core i5, 500 GB HDD, 4 GB RAM. First I installed Ubuntu 11.10 64-bit alongside Windows to dual boot. Later, since the problem was not solved, I installed Ubuntu 12.04 64-bit alongside Windows to dual boot. However, the problem keeps annoying me. Problem: when running Ubuntu 11.10/12.04, the battery lasts only about 1.5 hours. The fan runs loud and continuously, and there is a lot of heat generated. System Monitor shows less than 5% CPU used. My laptop has hybrid graphics, and I tried turning off the AMD graphics card and keeping the Intel graphics card on. However, I cannot get the fan noise or heat to go away, and consequently the battery drain continues. BTW, in Windows, the laptop gives 4-5 hours of battery power, the fan is silent, and there is no heat problem. Any ideas on how to reduce the fan noise and how to increase battery life in Ubuntu 11.10/12.04?
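    A hedged sketch of how "turning off the AMD card" is usually done on this era of kernels with the open-source radeon driver (assumption: vga_switcheroo is available, which it is not under the proprietary fglrx driver):

        sudo mount -t debugfs none /sys/kernel/debug    # if debugfs isn't mounted yet
        cat /sys/kernel/debug/vgaswitcheroo/switch      # shows which GPU is active
        echo OFF | sudo tee /sys/kernel/debug/vgaswitcheroo/switch   # power down the inactive GPU

    If the discrete GPU was never actually powered off, that alone can explain the heat and the short battery life.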

    Read the article

  • Smart Taskbar Is a Thumb Friendly Android Task Launcher

    - by ETC
    If you frequently use your phone one handed you'll definitely want to check out Smart Taskbar, an add-on for Android phones that makes it easy to launch apps with the swipe of your thumb. Smart Taskbar tucks an application launcher on the side of your screen, out of sight. Swipe your thumb across the screen and it slides out like a dock, revealing five of your favorite apps in a toolbar across the top and your lesser used apps in the main panel below. It's much easier to swipe to view your applications than it is to peck at the application icon on the home screen; Smart Taskbar is great for one handed launching. Search for "Smart Taskbar" in the Android Market to download a copy or hit up the link below to read more. Smart Taskbar [AppBrain]

    Read the article

  • Is there any way to kill a zombie process without a reboot?

    - by Pedram
    Is there any way to kill a zombie process without a reboot? Here is how it happens: I wanted to download a 12 GB torrent. After adding the .torrent file, Transmission turned into a zombie process. I tried KTorrent too - same behavior. Finally I could download the file using µTorrent, but after closing the program, it turns into a zombie as well. I tried using kill, skill and pkill with different options and the -9 signal, but no success. In some answers on the web I found that killing the parent can kill the zombie, but killing wine didn't help either. Is there another way?

    Edit:

        ps -o pid,ppid,stat,comm
          PID  PPID STAT COMMAND
         7121  2692 Ss   bash
         7317  7121 R+   ps

    pstree output:

        init---GoogleTalkPlugi---4*[{GoogleTalkPlug}]
         +-NetworkManager---dhclient
         | +-{NetworkManager}
         +-acpid
         +-apache2---5*[apache2]
         +-atd
         +-avahi-daemon---avahi-daemon
         +-bonobo-activati---{bonobo-activat}
         +-clock-applet
         +-console-kit-dae---63*[{console-kit-da}]
         +-cron
         +-cupsd
         +-2*[dbus-daemon]
         +-2*[dbus-launch]
         +-desktopcouch-se---desktopcouch-se
         +-explorer.exe
         +-firefox---run-mozilla.sh---firefox-bin---plugin-containe---8*[{plugin-contain}]
         | +-14*[{firefox-bin}]
         +-gconfd-2
         +-gdm-binary---gdm-simple-slav---Xorg
         | | +-gdm-session-wor---gnome-session---bluetooth-apple
         | | | | +-fusion-icon---compiz---sh---gtk-window-deco
         | | | | +-gdu-notificatio
         | | | | +-gnome-panel
         | | | | +-gnome-power-man
         | | | | +-gpg-agent
         | | | | +-nautilus---bash
         | | | | | +-{nautilus}
         | | | | +-nm-applet
         | | | | +-polkit-gnome-au
         | | | | +-2*[python]
         | | | | +-qstardict---{qstardict}
         | | | | +-ssh-agent
         | | | | +-tracker-applet
         | | | | +-trackerd
         | | | | +-wakoopa---wakoopa
         | | | | | +-3*[{wakoopa}]
         | | | | +-{gnome-session}
         | | | +-{gdm-session-wo}
         | | +-{gdm-simple-sla}
         | +-{gdm-binary}
         +-6*[getty]
         +-gnome-keyring-d---2*[{gnome-keyring-}]
         +-gnome-screensav
         +-gnome-settings-
         +-gnome-system-mo---{gnome-system-m}
         +-gnome-terminal---bash---ssh
         | +-bash---pstree
         | +-gnome-pty-helpe
         | +-{gnome-terminal}
         +-gvfs-afc-volume---{gvfs-afc-volum}
         +-gvfs-fuse-daemo---3*[{gvfs-fuse-daem}]
         +-gvfs-gdu-volume
         +-gvfsd
         +-gvfsd-burn
         +-gvfsd-http
         +-gvfsd-metadata
         +-gvfsd-trash
         +-hald---hald-runner---hald-addon-acpi
         | | +-hald-addon-cpuf
         | | +-hald-addon-inpu
         | | +-hald-addon-stor
         | +-{hald}
         +-hotot---xdg-open
         | +-3*[{hotot}]
         +-indicator-apple
         +-indicator-me-se
         +-indicator-sessi
         +-irqbalance
         +-kded4
         +-kdeinit4---kio_http_cache_
         | +-klauncher
         +-kglobalaccel
         +-knotify4
         +-modem-manager
         +-multiload-apple
         +-mysqld---10*[{mysqld}]
         +-named---10*[{named}]
         +-nmbd
         +-notification-ar
         +-notify-osd
         +-pidgin---{pidgin}
         +-polkitd
         +-pulseaudio---gconf-helper
         | +-2*[{pulseaudio}]
         +-rsyslogd---2*[{rsyslogd}]
         +-rtkit-daemon---2*[{rtkit-daemon}]
         +-services.exe---plugplay.exe---2*[{plugplay.exe}]
         | +-winedevice.exe---3*[{winedevice.exe}]
         | +-3*[{services.exe}]
         +-smbd---smbd
         +-snmpd
         +-sshd
         +-timidity
         +-trashapplet
         +-udevd---2*[udevd]
         +-udisks-daemon---udisks-daemon
         | +-{udisks-daemon}
         +-upowerd
         +-upstart-udev-br
         +-utorrent.exe---8*[winemenubuilder]
         | +-{utorrent.exe}
         +-vnstatd
         +-winbindd---2*[winbindd]
         +-2*[winemenubuilder]
         +-wineserver
         +-wnck-applet
         +-wpa_supplicant
         +-xinetd

    System Monitor and top screenshots show the zombie process using resources.
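    For what it's worth, a hedged sketch of the standard approach (assumption: the processes really are in Z state, which means they are already dead and hold nothing but a process-table slot):

        ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'   # list zombies with their parent PIDs
        kill -s SIGCHLD <ppid>                        # ask the parent (the PPID) to reap its child

    If the parent ignores SIGCHLD, killing the parent itself reparents the zombie to init, which reaps it immediately.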

    Read the article

  • How can I give a basic idea of what I'm working on to a non-programmer?

    - by Jesse
    As a relatively new programmer (1 year professionally, many years as an amateur) I've run into many situations that sent me running to Stack Overflow for answers that failed my meagre experience. Tonight I received the hardest question ever. My wife asked me: What are you working on? The question is deceptive in its simplicity. A straightforward and truthful answer of "I'm working on a C# class module for monitoring database delivery times" is sure to incite suspicions of an attempt to confuse. My second instinct was to suggest that it couldn't really be explained to a layperson; after very brief consideration I came to the conclusion that this would likely result in a long and sleepless night on the sofa. The end result was a muddled answer along the lines of "something to monitor automatic things to make sure they're delivered on time". The reception was fairly chilly, and I had to make many assurances that I was not insulting her ample intelligence. My question is thus: what is the best way to discuss your work as a programmer with a significant other who is not one?

    Read the article

  • Grub menu will not show the first time I boot my Ubuntu Server 12.04 after it has been shut down for a long time

    - by user211477
    I am running into a booting issue after installing Ubuntu Server 12.04 LTS. Following are the symptoms of the problem.

    SYSTEM DESCRIPTION:
    Dual-core AMD Athlon 64. 3 disks: two SATA (one of which is an SSD) and one PATA. Using LVM for disk partition management; /boot is not under LVM, the rest of the partitions are; / is on the SSD. The BIOS boot sequence is correct and points to the disk with /boot, and the boot loader is installed on this disk.

    SYMPTOMS:
    1. POST messages
    2. Blinking cursor on the first line, then it moves to the second line
    3. Screen flickers, then becomes black
    4. Everything is unresponsive; hard reboot
    5. POST messages will not show up on screen
    6. Monitor displays a powersave message
    7. Force shutdown of the machine again
    8. Shut off power to the machine for a few minutes
    9. Restart the machine
    10. POST messages show up
    11. Grub menu shows up
    12. Ubuntu Server 12.04 boots normally
    13. From now on Ubuntu Server boots normally until the machine is shut down for a long time (for example, 30 minutes)
    14. Repeat steps 1 through 13 once the machine is started after a long time

    WHAT DID I TRY?
    I read several posts and have tried: radeon.modeset=0, setting the gfxmode, edd=0, nolapic, and boot-repair. Nothing seems to work. In my search I did see only one post with this same symptom; unfortunately, I am not able to locate that post anymore. The interesting fact is that with this same machine configuration, if I install Ubuntu Desktop 12.04 then everything works fine. Any help will be appreciated.
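    In case it helps a reader trying the same options, a hedged sketch of how those kernel parameters are applied persistently (the values here are the ones from the post, not a recommendation):

        # Edit /etc/default/grub so the options survive kernel updates, e.g.:
        #   GRUB_CMDLINE_LINUX_DEFAULT="radeon.modeset=0 edd=0"
        sudo update-grub    # regenerate /boot/grub/grub.cfg with the new options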

    Read the article

  • How to Enable User-Specific Wireless Networks in Windows 7

    - by The Geek
    Wireless network settings in Windows 7 are global across all users, but there’s a little-known option that lets you switch them to per-user, so each user has access to only the networks they are allowed to connect to. Here’s how it all works. How is this useful? Maybe you want to prevent a particular user from accessing the internet—if you don’t give them the wireless password, they won’t be able to get online. This could be very useful if you’ve got mini-people playing games on the family PC, but you don’t want them getting online.

    Read the article

  • JMX Monitoring of GlassFish Servers

    - by tjquinn
    Did you ever wonder what this message in your GlassFish server.log file means?

        JMXStartupService has started JMXConnector on JMXService URL service:jmx:rmi://192.168.2.102:8686/jndi/rmi://192.168.2.102:8686/jmxrmi

    It means you can monitor any GlassFish server process, remotely or locally, using any standard Java Management Extensions (JMX) client - for example, jconsole or jvisualvm. Copy the part of the log message that starts with "service:" into the Add JMX Connection dialog of jvisualvm, or into the New Connection dialog of jconsole. (The full string is truncated in the on-screen display, but if you copied it from the server.log and pasted it into the form it should all be there.) The examples above are for a DAS, and your host will probably be different. The server.log files for other GlassFish servers (instances) will have similar log entries giving the JMX connection string to use for those processes; look for the host and/or port to be different.

    Note a few things about security: Here we've assumed you are using the default admin username and password. If you are not, just enter a valid admin username and password for your installation. Once connected, you have normal access to all the JVM statistics and controls. You can use JMX clients that support MBeans to view the GlassFish configuration. When you connect to the DAS, you can also change that configuration, but you can only view configuration when you connect to an instance. To use a JMX client on one system to connect to a GlassFish server running on another system, you need to enable secure admin if you have not already done so:

        asadmin change-admin-password    (respond to the prompts)
        asadmin enable-secure-admin
        asadmin restart-domain           (as prompted in the output from enable-secure-admin)
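    As a small convenience - a sketch, with the URL taken from the log line above - jconsole also accepts the service URL straight on the command line, skipping the connection dialog:

        jconsole service:jmx:rmi://192.168.2.102:8686/jndi/rmi://192.168.2.102:8686/jmxrmi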

    Read the article

  • More Stuff less Fluff

    - by brendonpage
    Originally posted on: http://geekswithblogs.net/brendonpage/archive/2013/11/08/more-stuff-less-fluff.aspx

    YAGNI – "You Aren't Going To Need It". This is an acronym commonly used in software development to remind developers to only write what they need. This acronym exists because software developers have gotten into the habit of writing everything they need to solve a problem and then everything they think they're going to possibly need in the future. Since we can't predict the future, this results in a large portion of the code that we write never being used. That extra code causes unnecessary complexity, which makes it harder to understand and harder to modify when we inevitably have to write something that we didn't think of.

    I've known about YAGNI for some time now but I never really got it. The words made sense and the idea was clear but the concept never sank in. I was one of those devs who'd happily write a ton of code in the anticipation of future needs. In my mind this was an essential part of writing high quality code. I didn't realise that in doing so I was actually writing low quality code. If you are anything like me you are probably thinking "Lies and propaganda! High quality code needs to be future proof." I agree! But what makes code future proof? If we could see into the future the answer would be simple, code that allows for or meets all future requirements. Since we can't see the future the best we can do is write code that can easily adapt to future requirements, this means writing flexible code.

    Flexible code is: fast to understand, fast to add to, and fast to modify. To be flexible, code has to be simple, which means only making it as complex as it needs to be to meet those 3 criteria. That is high quality code. YAGNI! The art is in deciding where to place the seams (abstractions) that will give you flexibility without making decisions about future functionality. Robert C Martin explains it very nicely: he says a good architecture allows you to defer decisions, because if you can defer a decision then you have the flexibility to change it.

    I've recently had a YAGNI experience which brought this all into perspective. I was working on a new project which had multiple clients that connect to a server hosted in the cloud. I was tasked with adding a feature to the desktop client that would allow users to capture items that would then be saved to the cloud. My immediate thought was "Hey we have multiple clients so I should build a web service for these items, that way we can access them from other clients", so I went to work and this is what I created. I stood back and gazed upon what I'd created with a warm fuzzy feeling. It was beautiful! Then the time came for the team to use the design I'd created for another feature with a new entity. Let's just say that they didn't get the same warm fuzzy feeling that I did when they looked at the design. After much discussion they eventually got it through to me that I'd bloated the design based on an assumption of future functionality. After much more discussion we cut the design down to the following. This design gives us future flexibility with no extra work; it is as complex as it needs to be. It has been a couple of months since this incident and we still haven't needed to access either of the entities from other clients. Using the simpler design allowed us to do more stuff with less stuff!

    Read the article

  • Karmetasploit (aircrack-ng) not consistently broadcasting AP SSID

    - by Sparky
    I cannot seem to get Karmetasploit to broadcast my AP. Actually, taking it back a few steps, I cannot get airbase-ng (v. r2154) to broadcast an SSID. I have seen it broadcast a few intermittent times (not many at all), but most of the time it doesn't show up at all. When it did show up the last time, it also came up as ad-hoc. The simplest command I have tried:

        sudo airbase-ng -e "Wifi-test" -c 11 -v mon0

    (I have tried with/without -c, and with -P -C 30.) It appears to work just fine on the attacking machine, but nothing gets broadcast. I have tried viewing from 3 different computers (WinXP, Win7, Ubuntu 12.04). Additionally, I am running Ubuntu 12.04, and I have tried 3 different wireless cards:

        Internal card: Intel 4965
        External USB:  Ubiquiti Atheros carl9170
        External USB:  ALFA AWUS036H Realtek RTL8187L

    I have tried putting each in/out of monitor mode (airmon-ng start monX). I have also tested whether injection is working:

        sudo aireplay-ng -9 mon0
        22:37:54 Trying broadcast probe requests...
        22:37:55 Injection is working!
        22:37:56 Found 4 APs

    Has anyone experienced this issue and have advice or a solution? The aircrack-ng forum site has been down for some time, so I cannot get advice from that site. Thanks, Sparky
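    One hedged check worth adding (an assumption about the cause, not a confirmed fix): a working injection test only proves the card can transmit raw frames, not that the driver supports AP/master mode, and interfering network managers can also keep beacons off the air.

        iw list | grep -A 10 "Supported interface modes"   # look for "* AP" under your card
        sudo airmon-ng check                               # lists processes known to interfere
        # stop anything it reports (e.g. NetworkManager, wpa_supplicant), then recreate mon0:
        sudo airmon-ng start wlan0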

    Read the article

  • Visual Studio 2010 editor painfully slow

    - by Daniel Gehriger
    I'm running out of patience with MS Visual Studio 2010: I'm working on a solution containing ~50 C++ projects. When using the editor, I experience a lag of 1-2 seconds whenever I move the cursor to a different line, or when I move to a different window, or generally when the editor loses and gains focus. I went through a whole series of optimizations, to no avail: installed all hotfixes for VS2010, disabled all add-ins and extensions, disabled IntelliSense, deleted all temporary files created by VS2010, disabled hardware acceleration, unloaded all but 15 projects, disabled tracking changes, closed all but one window, and so on. This is on a dual-core machine with an SSD hard drive (verified throughput 100 MB/s), enough free space on the drive, and Windows 7 Pro 32-bit with 3 GB of RAM, most of it still free. Whenever I type a letter, CPU usage of devenv.exe goes to 50-90% in the process monitor for 1-2 seconds before returning to 5%. I used Process Explorer to analyze registry and file system access, and I only notice frequent accesses to the .sln file (which is quite small) and a few registry reads, but nothing that would raise a red flag. I don't have this problem with solutions containing fewer projects, so I'm inclined to think that it's related to the number of projects. For your information, the entire solution has been migrated over the years from VS2005 to VS2008 to now VS2010. Does anyone have any ideas what else I could do to resume work on this project, other than returning to VS2008?

    Read the article

  • mdadm breaks boot due to "is not ready yet or not present" error

    - by BarsMonster
    This is so damn frustrating :-| I've spent something like 20 hours on this nice error, and it seems dozens of people all over the Internet have too, with no clear solution yet. I have a non-system RAID-5 of 5 disks, and it's fine. But during boot-up the system says "/dev/md0 is not ready yet or not present" and asks me to press 'S'. Very nice for Ubuntu Server - I have to bring over a monitor and keyboard just to continue booting. After this the system boots and it's all fine. The md0 device works, /proc/mdstat is fine, and when I do mount -a, it mounts this array without errors and works fine. As a dumb and shameful workaround I added noauto in /etc/fstab and did the mounting in /etc/rc.local - it works fine then. Any hints on how to make it work properly?

    fstab:

        UUID=3588dfed-47ae-4c32-9855-2d69df713b86 /var/bigfatdisk ext4 noauto,noatime,data=writeback,barrier=0,nobh,commit=5 0 0

    mdadm config (it is autogenerated):

        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # instruct the monitoring daemon where to send mail alerts
        MAILADDR CENSORED

        # definitions of existing MD arrays
        ARRAY /dev/md/0 metadata=1.2 bitmap=/var/md0_intent UUID=efccbeb6:a0a65cd6:470dcdf3:62781188 name=LBox2:0

        # This file was auto-generated on Mon, 10 Jan 2011 04:06:55 +0200
        # by mkconf 3.1.2-2
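    A hedged suggestion (an assumption about the cause, not a verified fix): if the array assembles only after mountall has already given up, baking the current mdadm.conf into the initramfs often clears the "not ready yet" prompt, and nobootwait keeps boot from stopping either way.

        sudo update-initramfs -u    # rebuild so early boot sees the ARRAY definition
        # /etc/fstab alternative to noauto (nobootwait is Ubuntu's mountall option):
        # UUID=3588dfed-... /var/bigfatdisk ext4 noatime,nobootwait,data=writeback,barrier=0,nobh,commit=5 0 0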

    Read the article

  • Can't complete Dropbox installation from behind a proxy in Ubuntu 12.04

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the Software Centre), but once I click OK on the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon), the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives thus far:

    - Added mj22:**@proxy.waikato.ac.nz:80 information to the network proxy settings under Network in Settings.
    - Added http_host and http_port variables under gconf-editor, system - proxy.
    - Added 'host', 'authentication_password' and 'authentication_user', and ticked 'user authentication' and 'use_http_proxy', under gconf-editor, system - http_proxy.
    - Added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc.
    - Added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which is what I imagine is letting the Software Centre retrieve packages).

    (where ** is my password) I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and the Software Centre can download packages, but that's it.

    Related issues: The Software Centre can't fetch reviews (but can download packages). When trying to add an online account in Gnome 3, a dialog pops up with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)".

    Updates: After some time (10 mins or so) Dropbox shows an error dialog that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?
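    To the closing question - the currently exported proxy variables can be inspected directly from a shell (note that exports in /etc/bash.bashrc only affect bash sessions, not GUI apps launched from the desktop):

        env | grep -i proxy    # shows every *_proxy variable the shell actually has set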

    Read the article

  • Design pattern for an automated mechanical test bench

    - by JJS
    Background: I have a test fixture with a number of communication/data acquisition devices on it that is used as an end-of-line test for a product. Because of all the various sensors used in the bench and the need to run the test procedure in near real time, I'm having a hard time structuring the program so it will be friendlier to modify later on. For example, a National Instruments USB data acquisition device is used to control an analog output (load) and monitor an analog input (current), a digital scale with a serial data interface measures position, an air pressure gauge uses a different serial data interface, and the product is interfaced through a proprietary DLL that handles its own serial communication.

    The hard part: The "real-time" aspect of the program is my biggest tripping point. For example, I need to time how long the product needs to go from position 0 to position 10,000 to the tenth of a second. While it's traveling, I need to ramp up an output of the NI DAQ when it reaches position 6,000 and ramp it down when it reaches position 8,000. This sort of control looks easy from browsing NI's LabVIEW docs, but I'm stuck with C# for now. All external communication is done by polling, which makes for lots of annoying loops. I've slapped together a loose Producer/Consumer model where the producer thread loops through reading the sensors and setting the outputs. The consumer thread executes functions containing timed loops that poll the producer for current data and execute movement commands as required. The UI thread polls both threads to update some gauges indicating current test progress.

    Unsure where to start: Is there a more appropriate pattern for this type of application? Are there any good resources for writing control loops in software (non-LabVIEW) that interface with external sensors and whatnot?

    Read the article

  • Have you used NDepend?

    - by Nick Harrison
    Have you used NDepend? I have often wanted to use it, but never spent the money on it. I have developed many tools that try to do pieces of what NDepend does, but never with as much success as it reaches. Put simply, it is a tool that will allow you to understand and monitor the architecture of your software, and it does it in some pretty amazing ways. One of the most impressive features is something that they call Code Query Language. It allows you to write queries very similar to SQL to track the performance of various software metrics and use this to identify areas that are out of compliance with your standards and architecture. For instance, once you have analyzed your project, you can write queries such as:

        SELECT METHODS WHERE IsPublic AND CouldBePrivate

    You can also set up such queries to provide warnings if there are records returned. You can incorporate this into your daily build and compare build against build. There are over 82 metrics included, allowing you to view your code from a variety of angles. I have often advocated for a "Code Inventory" database to track the state of software and the ROI on software investments. This tool alone will take you about 90% of the way there. If you are not using it yet, I strongly recommend that you do!

    Read the article

  • Mouse doesn't work & internet connection not made in Ubuntu 12.04 LTS

    - by David Skare
    Yesterday, Nov 15, 2012, I booted into my Ubuntu 12.04 LTS system. It has resided on a Crucial 128 GB SSD with about 90% free space since early summer. I also have Windows 7 loaded on another Crucial 256 GB SSD. Ubuntu has set up a dual-boot system for me even though each OS has its own SSD. I have been using this setup without problems since summer. Yesterday, when the boot process finished, my Microsoft Comfort Mouse 3000 did not work and there was a message that Ubuntu was not connected to the internet. So without the mouse I was forced to turn the machine off manually. About 4 days ago Ubuntu worked fine, and booting into Win 7 still works fine. I have a backup machine with the same style of mouse on it, so I swapped that mouse onto this system. Same results. But both mice work when booting into Win 7. Today I removed both SSDs and installed my old Ubuntu 12.04 HD, which has not been used since I moved Ubuntu from it to the SSD. Same results. Between the last time I used Ubuntu 12.04 on the SSD and when I tried to use it again, I made no changes to my machine, either hardware or software. My machine's specs are: AMD FX-6100, MSI 990FXA-GD65 AM3+ format with latest BIOS (Ver 19.9), Corsair Vengeance 1866 MHz memory - 16 GB (4 GB x 4 sticks), MSI N580GTX video card (nVidia 306.97 drivers), Sony Bravia 32" HD TV as a monitor, Pioneer BluRay DVD-RW, DSL connection to the internet through a router (10 Mbps), Crucial 128 GB SSD (90% free space), Microsoft Comfort Mouse 3000. I try to maintain current BIOS versions and drivers for all devices. I mostly use my Ubuntu system for programming in GCC and OpenCOBOL, surfing the internet, and e-mailing. No games are installed. I'm stumped! If anyone has experienced this same problem I'd appreciate knowing how you solved it. TIA, Dave

    Read the article
