Search Results

Search found 12365 results on 495 pages for 'core audio'.


  • Ubuntu 13.10, kernel 3.11 blank screen issue with hybrid graphics

    - by Lagerbaer
    On my HP Envy, which has both an Intel on-chip graphics card and an Nvidia GeForce, lshw reports:

        *-display UNCLAIMED
             description: 3D controller
             product: GK208M [GeForce GT 740M]
             vendor: NVIDIA Corporation
             physical id: 0
             bus info: pci@0000:01:00.0
             version: a1
             width: 64 bits
             clock: 33MHz
             capabilities: cap_list
             configuration: latency=0
             resources: memory:d2000000-d2ffffff memory:a0000000-afffffff memory:b0000000-b1ffffff ioport:5000(size=128) memory:b2000000-b207ffff
        *-display
             description: VGA compatible controller
             product: 4th Gen Core Processor Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 06
             width: 64 bits
             clock: 33MHz
             capabilities: vga_controller bus_master cap_list rom
             configuration: driver=i915 latency=0
             resources: irq:46 memory:d3000000-d33fffff memory:c0000000-cfffffff ioport:6000(size=64)

    I have trouble with all newer kernels. I basically had to install 12.04 LTS and use its 3.5 kernel family to get the system to boot. The 3.8 kernel from 12.10 and the newest 3.11 from Ubuntu 13.10 leave me with a black screen on boot. On one occasion I did hear the "log in" sound, but the screen did not display anything. I have purged all Nvidia drivers, so it should just be using the Intel driver, but apparently this is all messed up with newer kernel versions. This is different from the other "Nvidia boots into blank screen" bug in that I don't rely solely on an Nvidia card; surely the Intel on-chip card should be supported and give me something other than a blank screen? Again, it only works with kernel 3.5.0-41-generic, not with the 3.11.0-12 one that ships with Ubuntu 13.10. When I go into the GRUB menu and change the boot options from 'quiet splash' to 'nomodeset', I am able to boot the system, but then I don't get any graphics, and 'sudo service lightdm start' doesn't succeed (I get 100% CPU for apport, which doesn't do anything either, so I kill it). Help, I'm all out of ideas. EDIT: Let me add that I'm using the EFI boot system and have a dual-boot installation with Windows 8.
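
    Since the 'nomodeset' workaround at least lets the machine boot, it can be made permanent instead of being retyped at the GRUB menu each time. A minimal sketch, assuming the stock Ubuntu GRUB 2 setup:

        # /etc/default/grub -- swap 'quiet splash' for nomodeset
        GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"

        # then regenerate grub.cfg so the change takes effect
        sudo update-grub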

    Read the article

  • Help with migrating from Windows: x64, FGLRX, CPU load, Java and Minecraft

    - by joxer
    I'm new to Ubuntu; this is the second time I have installed it. This computer is a Dell Studio 1558. Some specs: CPU: Intel Core i7 Q720 1.6GHz; GPU: ATI Mobility Radeon HD 5400. FGLRX: I've followed these instructions, among inspecting many others, and tried all of the variants mentioned in that thread before reverting back to the drivers supplied with Ubuntu (through Additional Drivers), which apparently work best. I am testing them with Minecraft, as silly as that may sound. Within 2 to 60 minutes the FPS drops from 70+ to somewhere between 0 and 5, while "fgl_glxgears" runs smoothly at between 400 and 800 FPS. I am using the Oracle (Sun) JRE6 to run Minecraft, which I installed through a tutorial linked on Oracle's website; I currently have no other version of Java installed (it was worse when I had a few others here). After closing the game, Ubuntu is similarly slow. I've checked the CPU load using System Monitor and it shows one of the CPUs jumping to 80%~100% load at a time; a reboot solves it. I realize my mess is mine to solve, but a hand is always appreciated. Thanks very much in advance.
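
    To pin down which process is eating the CPU after the game exits, a one-shot listing sorted by CPU usage is enough (a sketch; run it while the slowdown is actually happening):

        # show the five hungriest processes, highest first
        ps -eo pid,comm,%cpu --sort=-%cpu | head -6

        # or run 'top' interactively and press Shift+P to sort by CPU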

    Read the article

  • Nvidia GeForce GT 520-CN on Intel DH61WW, Ubuntu 12.04

    - by j goseeped
    Hi people, I hope you can help a little bit; I appreciate your time. I have this desktop: i7 2600, 8GB DDR3 RAM, Intel DH61WW board, Nvidia GeForce GT 520-CN 2GB DDR3. I just installed Ubuntu 12.04 64-bit, kernel 3.2.0-23-generic. I want to set up two Samsung 22" LED monitors and get my video card going.

    1) I downloaded and installed Nvidia driver 295.59 (and also tried 302.17):

        apt-get update && apt-get upgrade
        apt-get install build-essential linux-headers-$(uname -r)
        apt-get remove --purge nvidia*
        apt-get remove --purge xserver-xorg-video-nouveau

    then blacklisted the framebuffer and nouveau modules in /etc/modprobe.d/blacklist.conf:

        blacklist vga16fb
        blacklist nouveau
        blacklist rivafb
        blacklist nvidiafb
        blacklist rivatv

    and ran sh NVIDIA.run, sudo service lightdm start, reboot, and nvidia-xconfig.

    2) After reboot I get 800x600, and nvidia-settings says: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server."

    3) I changed xorg.conf a little bit to set up a resolution that works properly.

    4) I don't get any image on the monitor and I don't have any options in Nvidia X Server Settings.

        lspci | grep VGA
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 520] (rev a1)

        egrep -i 'glx|nvidia' /var/log/Xorg.0.log
        [    12.005] (II) LoadModule: "glx"
        [    12.005] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
        [    12.575] (II) Module glx: vendor="NVIDIA Corporation"
        [    12.585] (II) NVIDIA GLX Module  302.17  Tue Jun 12 16:22:45 PDT 2012
        [    12.585] (II) Loading extension GLX
        [    13.037] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)
        [    13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event10)
        [    13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event9)

        glxinfo | grep direct
        Xlib:  extension "GLX" missing on display ":0.0".
        Error: couldn't find RGB GLX visual or fbconfig

    Sorry, my English is not very good. Thanks, guys.
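
    The "(EE) Failed to initialize GLX extension" line usually means the X server's libglx does not match the installed driver, which is common after mixing the .run installer with packaged bits. One recovery path that often works on 12.04 is sketched below (assuming the downloaded installer file is still around under whatever name it was saved as):

        # remove the .run-installed driver, then use Ubuntu's packaged one,
        # which pulls in a matching libglx
        sudo sh NVIDIA.run --uninstall
        sudo apt-get install nvidia-current
        sudo nvidia-xconfig
        sudo reboot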

    Read the article

  • What do I need to learn to decide on rename/recompile source package names because of company rebranding?

    - by Roberto Linares
    My company is currently going through a rebranding process, and the old brand names appear in the sources' package names. These names are only visible to the developers who maintain the code, so nobody in project management is interested in changing them, especially since doing so would mean recompiling several old components. What factors do I need to consider when deciding on a change like that? I don't know whether I should worry about legal issues, and if so, how to raise them with project management. More background: I have all the sources and dependencies, but since the rebranding, other development areas have adopted some of the code whose packages would need renaming, so I cannot make the decision alone without breaking everyone else's code that depends on my core components, and I cannot change other areas' code without those areas' permission. So yes, my concern is more political than technical. I am going to try to coordinate with the IT areas involved to make the change anyway, since that seems to be the best approach. Unfortunately, my company has no continuous integration build server, so we build our code manually on demand, and to get anything into production I have to justify the change (even just a package rename) to QA with a user requirement and other bureaucratic documentation. That's why I was hesitant about the change in the first place.
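
    If the rename does go ahead, the mechanical part is usually a tree move plus a bulk rewrite of package and import statements. A minimal sketch, assuming a Git-managed Java tree; the package names here are hypothetical placeholders, not the real brands:

        # move the source tree, then rewrite package/import statements
        git mv src/com/oldbrand src/com/newbrand
        grep -rl 'com\.oldbrand' src/ | xargs sed -i 's/com\.oldbrand/com.newbrand/g'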

    Read the article

  • The latest version of the EJB 3.2 spec available on java.net project

    - by Marina Vatkina
    If you are not following us on the users alias, here is a quick update. Just before JavaOne, I uploaded the latest version of the EJB 3.2 Core document to the ejb-spec.java.net downloads. If you want to see the detailed changes, download it. If you are interested in the high-level list, or would like to know what to look for, this is the list of changes since the previous version (found on the same download page):

    - Specified that the SessionContext object in a singleton session bean is thread-safe
    - Clarified that the EJB timer distribution and failover rules apply only to persistent timers
    - Clarified that non-persistent timers returned by the getTimers and getAllTimers methods are from the same JVM as the caller
    - Fixed section numbering (left over after moving it to its own chapter) in Ch 17
    - Noted that only 3.0 and 3.1 deployment descriptors are required to be supported in EJB 3.2 Lite for prior versions of the applications
    - Fixes for EJB_SPEC-61 (Ambiguity in EJB Lite local view support) and EJB_SPEC-59 (Improve references to the component-defining annotations)
    - JMS/MDB changes: added the new standard activation properties and the unique identifier, and rearranged sections for easier navigation
    - Fixed unresolved cross-references
    - Updated the rule: only local asynchronous session bean invocations are supported in EJB 3.2 Lite
    - Synchronized permissions in the table with the permissions listed for the EJB components in the Java EE Platform Specification Table EE.6-2
    - Specified that during processing of the close() method, the embeddable container cancels all pending asynchronous invocations and non-persistent timers
    - Updated most of the referenced documents to their latest versions

    Happy reading!

    Read the article

  • Displaying Datamatrix in application error screen

    - by DaveNay
    Quite often we will get a report from a user in the field saying there was an error in our application. Frequently this leads to the typical round of "What was the error?" "I don't know, it was just an error." We of course log these faults to the log files, and we can even enable detailed debug logs, but this involves the end user changing a setting in the configuration file, finding the correct files, and then emailing them to us. As I'm sure you can all imagine, there are plenty of pitfalls and alligators in this methodology. Recently a couple of people have used their cell phones to email me a screen capture of the fault, and while this helps, we still have to scrutinize the image to find the exact fault and, if enabled, the stack trace. So this evening I had the brilliant idea (IMHO) to encode the fault into a Datamatrix barcode image and then encourage users to send me a picture from their cell phone. I can then decode the Datamatrix and get a parseable error message! Our core technology is machine vision, so decoding the Datamatrix image would be trivial; I just need to find a method of generating the actual image to display in the fault handler. Thoughts?
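
    For prototyping the generating side before wiring it into the fault handler, the command-line tools from the open-source libdmtx project are a quick way to round-trip a message (a sketch; the fault text is made up, and whether libdmtx fits the application's platform and licensing is an assumption to check):

        # encode a fault string into a Data Matrix PNG
        echo "FAULT 0x2A: feeder jam, see trace 1808" | dmtxwrite -o fault.png

        # confirm it decodes back to the same text
        dmtxread fault.png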

    Read the article

  • How can I extend the desktop onto an external monitor/projector?

    - by hellocatfood
    I've plugged a projector into my laptop and I'm attempting to extend the desktop onto it (so that I can run a full-screen app on the projector and keep the controls on my laptop). I'm able to mirror the screens (it does this by default), but I can't extend. When I untick "Mirror screens" and press Apply, it asks me to log out and back in again, but then it goes back to mirroring the screens. I'm able to extend the desktop onto my external monitor at home, just not onto this projector. Is there a manual way, or some way other than the Monitors setting, to do this? My computer is a Dell Studio 1555: Pentium Dual Core T4300 (2.1GHz, 800MHz, 1MB), 4096MB 800MHz DDR2 dual channel, 512MB ATI Mobility Radeon HD 4570 using the ATI proprietary driver. My screen resolution is 1366x768 (16:9). The projector that won't connect properly is a Hitachi CPX3; its page specifies that it's especially designed for a 16:10 aspect ratio, but considering my external monitor at home uses 4:3, should the difference in aspect ratio matter, or be causing this error?
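
    The usual manual route is xrandr from a terminal. A sketch, where LVDS1 and VGA1 are assumed output names; run the query first to see the real ones:

        # list the outputs the driver knows about
        xrandr -q

        # extend: put the projector to the right of the laptop panel
        xrandr --output LVDS1 --auto --output VGA1 --auto --right-of LVDS1

    One caveat: with the proprietary fglrx driver, RandR support can be limited, in which case the Catalyst Control Center (amdcccle) is the equivalent place to set an extended layout.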

    Read the article

  • Using pkexec policy to run out of /opt/

    - by liberavia
    I'm still trying to make it possible to run my app with root privileges. To that end I created two policies to run the application via pkexec (one for /usr/bin and one for /opt/extras...) and added them to setup.py:

        data_files=[('/usr/share/polkit-1/actions',
                     ['data/com.ubuntu.pkexec.armorforge.policy']),
                    ('/usr/share/polkit-1/actions',
                     ['data/com.ubuntu.extras.pkexec.armorforge.policy']),
                    ('/usr/bin/', ['data/armorforge-pkexec'])]

    Additionally I added a start script which uses pkexec to launch the application. It distinguishes between the two install locations and is used in the Exec statement of the desktop file:

        #!/bin/sh
        if [ -f /opt/extras.ubuntu.com/armorforge/bin/armorforge ]; then
          pkexec "/opt/extras.ubuntu.com/armorforge/bin/armorforge" "$@"
        else
          pkexec `which armorforge` "$@"
        fi

    If I simply run a plain quickly package, everything works right. But if I package with the extras option, quickly package --extras, the Exec statement gets exchanged. Even if I try to simulate the pkexec call via armorforge-pkexec, it asks for a password and then returns this:

        andre@andre-desktop:~/Entwicklung/Ubuntu/armorforge$ armorforge-pkexec
        (armorforge:10108): GLib-GIO-ERROR **: Settings schema 'org.gnome.desktop.interface' is not installed
        Trace/breakpoint trap (core dumped)

    So OK, I could not trick the opt thing. How can I make sure that my application runs with root privileges out of /opt? I copied the pkexec approach from Synaptic. My application communicates with AppArmor, which currently has no D-Bus interface, so I need to write into the /etc/apparmor.d folder. How should I deal with the opt build which, as far as I understand, is required to submit my application to the Ubuntu Software Center? Thanks for any hints and/or links :-)
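
    For reference, this is roughly the shape such a pkexec action file takes (a sketch modeled on the /opt variant; the message text is made up, and the action id and exec.path must match the real install):

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE policyconfig PUBLIC
         "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
         "http://www.freedesktop.org/standards/PolicyKit/1.0/policyconfig.dtd">
        <policyconfig>
          <action id="com.ubuntu.extras.pkexec.armorforge">
            <message>Authentication is required to run ArmorForge</message>
            <defaults>
              <allow_any>auth_admin</allow_any>
              <allow_inactive>auth_admin</allow_inactive>
              <allow_active>auth_admin</allow_active>
            </defaults>
            <annotate key="org.freedesktop.policykit.exec.path">/opt/extras.ubuntu.com/armorforge/bin/armorforge</annotate>
            <annotate key="org.freedesktop.policykit.exec.allow_gui">true</annotate>
          </action>
        </policyconfig>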

    Read the article

  • JavaOne Technology Conference Is Coming to Russia

    - by Tori Wieldt
    JavaOne Russia, 17-18 April
    Russian Academy of Sciences, Moscow
    Register Now

    JavaOne and Oracle Develop 2012 Russia offers a wide variety of sessions, hands-on labs, keynotes, demos, and the opportunity to network with developer peers. If you're looking for in-depth sessions on Java technologies and tools, this is the conference for you. Your registration also gets you into Oracle Develop sessions, so you can learn about application servers, cloud development and, of course, database development. The JavaOne Russia tracks are:

    Client-Side Technologies and Rich User Experiences
    Learn about developments in Java for the desktop and practices for building rich, immersive, and powerful user experiences across multiple hardware platforms and form factors.

    Core Java Platform
    Discover the latest innovations in Java virtual machines. Get deep technical explanations of security and networking, and of the enhancements that allow dynamic programming languages to drive Java platform adoption.

    Java EE Web Profile, Platform Technologies, Web Services, and the Cloud
    Update your knowledge on topics such as Web application development, persistence, security, and transactions. This track will also address modularity, enterprise caching, Web sockets, and internet identity.

    Mobile, Java Card, Embedded, and Devices
    This track is devoted to Java technology as the ultimate platform for mobile computing. It also covers embedded and device usages of Java technologies, including Java SE, Java ME, Java Card, and JavaFX.

    Share this event: #oracleRU

    Read the article

  • How to enable a Web portal-based rich enterprise platform on different domains and hosts using JS, without customization/server configuration

    - by S.Jalali
    Our company, Coscend, has built a Web portal-based communications and cloud collaboration platform using JavaScript (JS) embedded in HTML5 and formatted with CSS3. Other technologies used in the core code include Flash, Flex, PostgreSQL and MySQL. Our team would like to host this platform in five different Windows and Linux environments that run different types of Web servers, such as IIS and Apache.

    Technical challenge: Each of these Windows and Linux servers has a different host name and domain name (and IP address), but we would like to keep our enterprise platform independent of host server configuration.

    Possible approach to a solution: We think an API (an interface module with a GUI) is needed to accomplish this level of modularity and flexibility when deploying at our enterprise customers.

    Seeking your insights: In this context, our team would appreciate your guidance on the following. Is there an algorithmic method to implement this Web portal-based platform in these Windows and Linux environments while separating it from server configuration, i.e., customizing the host name, domain name and IP address for each individual instance? For example, would it be suitable to create some JS variables/objects for the host name and domain name and call them in the different implementations? If a reference to the host/domain names occurs in hundreds of portal modules, these variables or JS objects would replace them. If so, what is the best way to make these object modules written in JS portable and reusable across different environments and instances for enterprise customers?

    Here is an example of the implemented code for the said platform: the Web site www.CoscendCommunications.com was built using this enterprise collaboration platform and has the base code examples of the platform. That Web site is domain-specific; we would like to make the underlying platform domain- and host-independent, which will allow it to be deployed in multiple instances for our enterprise customers.
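
    As a minimal sketch of the JS-variables idea (illustrative only; none of these names come from Coscend's code), the host and domain can be derived from the page's own location at runtime, so only one small config object ever differs per deployment:

        // site-config.js -- the only per-deployment file (hypothetical name)
        var SiteConfig = {
            host:     window.location.hostname,
            protocol: window.location.protocol,
            // absolute base that every module uses instead of a hard-coded URL
            apiBase:  window.location.protocol + '//' + window.location.host + '/api'
        };

        // modules build service URLs through the config, never from literals
        function serviceUrl(path) {
            return SiteConfig.apiBase + '/' + path;
        }

    Anything that cannot be derived at runtime then lives in that one object, so porting the platform to a new host means touching a single file rather than hundreds of modules.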

    Read the article

  • Asus A8V overcurrent

    - by user139710
    This is not so much a question as a note to those out there who upgrade their motherboards with better processors and the like. Here's my story. Recently I upgraded my processor. System specifications:

    • Asus A8V Deluxe
    • 4GB RAM
    • ATI Radeon 3870 AGP graphics card (I believe that's it)

    Anyway, I decided to put a dual-core Opteron 180 in this rig, but the problem was that I needed to update the BIOS to v1017. Not knowing the consequences, I went to the Asus site and got the newest, the latest and the greatest, 1018.002, thinking that it was the best for this board. It wasn't. I used Asus EZ Flash, which makes life a lot easier, flashed the BIOS, and all of a sudden I started getting this message: "USB overcurrent protection, system shutting down in 15 seconds to protect your system." Well, shit... this is a new one on me. I read the blogs and all the posts on this thing, and did everything everyone else did to correct the problem, but nothing helped. So I decided to start from square one, went back to Asus and looked at the BIOS downloads... it was a beta. So I downloaded the suggested update (1017), installed it, and wouldn't you know, it took care of the problem: no more USB overcurrent protection, no more crashing. I write this today to let you all know about this, just in case you run into an issue such as this. Well, there you all go. Fly safe and eat your vegetables.

    Read the article

  • Package manager doesn't work anymore

    - by LukaD
    I'm using Ubuntu 10.10 and recently my package manager stopped working because of some problem with dependencies. I can't upgrade, install or uninstall anything at all, which is a huge problem. I couldn't find a solution with Google, so I'm asking here for help. This is what apt-get -f install outputs:

        LANG=en_US.UTF-8 sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following package was automatically installed and is no longer required:
          firefox-4.0-core
        Use 'apt-get autoremove' to remove them.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up openjdk-6-jre-headless (6b20-1.9.5-0ubuntu1) ...
        update-alternatives: error: alternative path /usr/lib/jvm/java-6-openjdk/jre/bin/java doesn't exist.
        dpkg: error processing openjdk-6-jre-headless (--configure):
         subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing:
         openjdk-6-jre-headless
        E: Sub-process /usr/bin/dpkg returned an error code (1)
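
    The failing step is the java alternative pointing at a path that no longer exists, so one common fix (a sketch; it assumes the package files themselves are intact) is to drop the stale alternative and reinstall the package so its post-installation script can re-register it:

        # remove the dangling alternative, then reinstall the JRE
        sudo update-alternatives --remove java /usr/lib/jvm/java-6-openjdk/jre/bin/java
        sudo apt-get install --reinstall openjdk-6-jre-headless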

    Read the article

  • Ubuntu 12.04 LTS Install Problems (See post for system build details.)

    - by Lokitez
    This is my first ever attempt at working with Ubuntu. I have only ever installed Windows in the past, and that may be the problem. I purchased all new hardware this week and I would really like to give Ubuntu a chance (especially since I don't want to buy another Windows license). First, the hardware:

    • AMD FX-8150 Zambezi 3.6GHz Socket AM3+ 125W eight-core desktop processor
    • ASUS Crosshair V Formula AM3+ AMD 990FX SATA 6Gb/s USB 3.0 ATX gaming motherboard
    • Samsung 830 Series MZ-7PC128D/AM 2.5" 128GB SATA III MLC internal SSD - this is my intended boot drive
    • Western Digital VelociRaptor WD5000HHTZ 500GB 10000 RPM SATA 6.0Gb/s 3.5" internal hard drive - a backup drive on which I have installed Windows Vista until I can get Ubuntu to work
    • G.SKILL Ripjaws X Series 16GB (2 x 8GB) 240-pin DDR3 SDRAM, DDR3-1600 (PC3 12800)
    • ASUS HD7850-DC2-2GD5 Radeon HD 7850 2GB 256-bit GDDR5 PCI Express 3.0 x16

    I have downloaded and tried to install both Ubuntu 64-bit and Kubuntu 64-bit (both 12.04). Both always fail to copy a file during install, or otherwise lock up during install to the SSD. I have burned two copies of Ubuntu 12.04 and had the install fail with both. I have installed Vista onto the HDD. Is it possible to mount the Ubuntu file into
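
    Installs that die while copying files to disk are very often a corrupted download or a bad burn rather than the SSD, so it is worth ruling that out first (a sketch; compare the checksum against the one published for the 12.04 image):

        # verify the downloaded image before burning it again
        md5sum ubuntu-12.04-desktop-amd64.iso

    The disc's own boot menu also has a "Check disc for defects" entry that validates the burned media itself.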

    Read the article

  • Preferred way for dealing with customer-defined data in enterprise application

    - by Axarydax
    Let's say that we have a small enterprise web (intranet) application for managing data for car dealers. It has screens for managing customers, inventory, orders, warranties and workshops. This application is installed at 10 customer sites for different car dealers. The first version of this application was created without any way to provide for customer-specific data. For example, if dealer A wanted to be able to attach a photo to a customer, dealer B wanted to add an e-mail contact to each workshop, and dealer C wanted to attach multiple PDF reports to a warranty, each and every feature like this was added to the application, so all of the customers received everything with each update. However, this will inevitably lead to conflicts as the number of customers grows, since their usage patterns are unique: if, for instance, a specific dealer requested the ability to attach (for some reason) a color to an inventory item (and to search by this color) as a required field, the others really wouldn't need this feature, and definitely would not want it to be required. Or one dealer would like to manage e-mail contacts for their employees on a separate screen of the application. I imagine that a solution for this is a kind of plugin system, where we would have a core application that provides the standard features (customers, inventory, etc.) plus each customer's installed plugins. There would be different kinds of plugins: standalone screens, like e-mail contacts for employees, with their own logic, and customer plugins that extend or decorate inventory items (like the photo or color). Inventory (customer, order, ...) plugins would need an installation procedure and hooks for plugging into the item editor, the item displayer, item filtering for searching, backup, and such. Is this the right way to solve this problem?
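
    To make the hook idea concrete, here is a minimal sketch of what such a plugin contract could look like (PHP is assumed only for illustration, and every name here is hypothetical, not an existing API):

        <?php
        // Hooks a customer-specific inventory plugin can implement (hypothetical)
        interface InventoryItemPlugin
        {
            public function install();                  // create its own tables, etc.
            public function decorateEditor($editor);    // add fields such as "color"
            public function decorateView($view);        // render them on the item page
            public function filterQuery($query);        // make the new fields searchable
            public function backup($writer);            // include plugin data in backups
        }

        // The core stays generic and only iterates over installed plugins, e.g.:
        // foreach ($registry->forEntity('inventory') as $p) { $p->decorateEditor($editor); }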

    Read the article

  • Quantify value for management

    - by nivlam
    We have two different legacy systems (Windows services, in this case) that do exactly the same thing. Each has small differences for the different applications it serves, but the core functionality of both lies within a shared library. Most of the time, updates occur in the shared library and we simply deploy the updated library to both systems; the systems themselves rarely change. Since both of these systems do essentially the same thing, our development team would like to consolidate them into a single service. What can I do to convince management to allocate time for such a task? Some of the points I've noted are: easier maintenance, and decreased testing/QA time. Unfortunately, this isn't enough. They would like us to provide hard numbers on the amount of hours this will save in the future and on how it will speed up future development. Since most of the work is done in the shared library and the systems themselves never change, it's hard for us to quantify how many hours this will save. What kind of arguments can I make to justify the extra work of consolidating these systems?

    Read the article

  • EDQ Technical Enablement for OPN (Prague - June 17-19)

    - by milomir.vojvodic
    Oracle Enterprise Data Quality (EDQ) Technical Enablement and Partner Training
    Trusted Data for Your Enterprise Applications

    Oracle Enterprise Data Quality helps organizations achieve maximum value from their business-critical applications by delivering fit-for-purpose data. These products also enable individuals and collaborative teams to quickly and easily identify and resolve any problems in underlying data. With Oracle Enterprise Data Quality, customers can identify new opportunities, improve operational efficiency, and more efficiently comply with industry or governmental regulation. Oracle Enterprise Data Quality is designed to serve as a very channel-friendly platform for OPN. This means that pre-built extensions, components and even complete business solutions can readily be built and shared, which allows our customers and partners to be highly efficient in how they deploy custom business solutions, and also allows our partners to develop specialized components, domain knowledge and even complete business solutions.

    Training is suitable for:
    · Database administrators
    · Architects
    · Technical staff

    Objectives of the training - after completing this course, participants should:
    · Have an understanding of the core functionality of EDQ across profiling, auditing, transforming, parsing and matching data
    · Be able to describe some of the key capabilities and benefits delivered by EDQ
    · Be able to create and run standalone EDQ processes and jobs
    · Be ready to start working with data from customers and (with practice) be able to demonstrate EDQ to customers

    Agenda

    17th June - Fundamentals for Demoing (Profile, Audit, Transform and More):
    Profiling; Auditing; Transforming; Writing and exporting data; Jobs and scheduling; Publishing, packaging and copying EDQ processes; Introduction to the Customer Data Extension Pack; Realtime processing via Web Services; The Server Console; Run Profiles; Data Interfaces; Sampling; Publishing metrics to the Dashboard; Users and security

    18th June - Matching:
    Matching overview; Basic matching configuration; Matching rule hierarchies; Clustering; Merging; Reviewing possible matches; Outputting match data; Case study

    19th June - Address Verification and Parsing:
    Address Verification overview; Configuration; Accuracy flags; Parsing overview; Phrase profiling; Tailoring a CDEP parser; Base tokenization; Classification; Reclassification; Selection; Resolution

    Register Here. Don't miss this FREE event. Space is limited.

    Oracle University, V Parku 2294/4, 148 00 Praha 4
    17.6.–19.6.2014, 09:00–17:30

    Read the article

  • Random compositing lag

    - by user1020567
    My laptop specs: 512 MB of RAM, of which 64 MB are shared with an integrated GPU, an ATI Radeon Xpress 200M, and an Intel 1.6 GHz Celeron M single-core processor. I've spent months trying to figure out why compositing and effects sometimes lag on any distro I try. Now I've come to realise that no matter what drivers I try (the default ones work for me on pretty much any Linux), the compositing lag is random. When I used Ubuntu 10.10, for example, sometimes window compositing would lag and sometimes it wouldn't. The PC is able to render those effects, so hardware is not the problem. It's completely random and unpredictable: sometimes when I turn on the computer the effects lag horribly, and sometimes everything is completely smooth. I've also checked the startup items and there don't seem to be any unnecessary entries. I also tried building my own OS with Arch Linux and the problem persists there, so I can only assume that it's a driver issue of some sort. By default there are lots of drivers supplied with Linux distributions. Could it be that they're in the way? The ones that I need are ati/radeon (or both? What's the difference between them?) and there seem to be a lot of others... What should I do?
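
    One quick check on any given boot is which kernel driver actually claimed the GPU (a sketch; 'radeon' is the open-source kernel DRM module, while 'ati' is the X.Org driver that sits on top of it, which is why both names appear):

        # show the VGA device and the kernel driver bound to it
        lspci -k | grep -A3 VGA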

    Read the article

  • Database commands

    - by user12609425
    Ops Center has two database options: you can have Ops Center automatically install a database on the Enterprise Controller system, or you can use your own database on any system you choose. If you use your own database, it's obviously important to make sure that this database is running smoothly. You have a few tools that can help you do this. The first is the ecadm command. This command has a variety of subcommands that let you view and control the status of the Enterprise Controller. Two subcommands in particular are relevant to the database:

    ecadm verify-db: This subcommand verifies that the database is reachable and that the schemas are configured with the proper permissions. Use the -v option if you want more details; the command is normally terse if the DB is configured correctly.

    ecadm sqlplus -r: This subcommand opens an sqlplus console connection to the database. The -r option makes this console read-only, which isn't necessary but is generally a good idea.

    You can also view the database contents using Oracle SQL Developer or other tools. The "Accessing Core Product Data" how-to describes this process.
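
    A routine health check with those two subcommands might look like this (a sketch; depending on the platform, the ecadm binary may not be on the default PATH, so the full installation path may be needed):

        # verify DB connectivity and schema permissions, with details
        ecadm verify-db -v

        # poke around the schema in a read-only sqlplus console
        ecadm sqlplus -r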

    Read the article

  • USB stick too slow to benchmark?

    - by user85340
    I have a Core 2 Duo with 3GB RAM. After some time using Xubuntu 10.10 on an 8GB stick, I decided to switch to 12.04 and put it on a 32GB stick (Transcend). I use EXT4 with no journalling, noatime, etc. set; /tmp and /run use tmpfs. And it is REALLY slow: MUCH slower than the old Xubuntu on the 8GB stick. Starting takes minutes, and all applications "fade" because they respond too slowly. I first thought that the NVidia graphics card was responsible for this, because there seem to be some known problems with that, but doing the adjustment (unchecking the sync checkbox) did not help. I now believe the root cause is that access to the USB stick is extremely slow. Running the read benchmark in Disk Utility then brought up the message "disk is too slow to benchmark"! BUT: when I run the same benchmark from the live CD, I get around 20MB/s read performance and have a very responsive system! So how can I find out what is going on here?
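
    For a raw read number that bypasses Disk Utility entirely, hdparm works on USB sticks as well (a sketch; /dev/sdb is an assumption, so check first which device node the stick actually got):

        # confirm the device node of the stick
        lsblk

        # buffered sequential read benchmark
        sudo hdparm -t /dev/sdb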

    Read the article

  • Will the Driver Support for Intel HD Graphics be Improved in 12.10?

    - by Hiranya
    I recently installed Ubuntu 12.04 on an HP Pavilion dv4 laptop. This is a Core i7 machine with Intel HD graphics and also a separate nVidia VGA card. I had a lot of issues getting Ubuntu 12.04 working on this system. First there were issues booting the live CD for installation; I worked around that by using the 'nomodeset' option. I continued to have similar issues after installation completed, so I had to permanently add the nomodeset option to my GRUB boot configuration. At the moment I have a working installation, but there are many issues:

    The Ubuntu GUI is a bit flaky at times. The mouse pointer flickers when hovering over certain icons, and certain things don't get rendered properly on the screen.

    I can't access any of the tty consoles. Hitting Ctrl+Alt+F[1-6] gives me a blank screen, and once that happens I can't even come back to the UI by hitting Ctrl+Alt+F7. I've since realized that the tty consoles are actually working; I just can't see the text. If I type a command like 'sudo reboot' into the empty screen, the machine reboots.

    I can't get external displays (monitors, projectors, etc.) working, but I think this is because the VGA out is wired to the nVidia card, which is not being used by Linux.

    The colord program crashes every now and then, triggering a popup message.

    So my main question is: will the support for Intel HD graphics be improved in the next release, or will I have to keep using the nomodeset option in the new release too? I'd also appreciate it if anybody can shed some light on any of the issues listed above. Thanks in advance.

    Read the article

  • Totem crashes immediately after startup in 12.10

    - by Sakib Hasan
    I did a fresh install of Ubuntu 12.10 and ran sudo apt-get update && sudo apt-get upgrade. Then I installed ubuntu-restricted-extras, Audacious and VLC from the Software Center. After that I tried to launch the Totem movie player, but the following error comes up in the terminal:

        (totem:9295): Gdk-ERROR **: The program 'totem' received an X Window System error.
        This probably reflects a bug in the program.
        The error was 'BadDrawable (invalid Pixmap or Window parameter)'.
          (Details: serial 1808 error_code 9 request_code 152 minor_code 9)
          (Note to programmers: normally, X errors are reported asynchronously;
           that is, you will receive the error a while after causing it.
           To debug your program, run it with the GDK_SYNCHRONIZE environment
           variable to change this behavior. You can then get a meaningful
           backtrace from your debugger if you break on the gdk_x_error() function.)
        Trace/breakpoint trap (core dumped)

    I tried purging and reinstalling, but the error remains. What should I do?
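
    The note in the error text itself points the way to a useful backtrace: make X errors synchronous and break on gdk_x_error (a sketch, assuming the gdb package is installed):

        # run totem under gdb with synchronous X errors
        GDK_SYNCHRONIZE=1 gdb -ex 'break gdk_x_error' -ex run totem

        # at the breakpoint, print the backtrace
        (gdb) bt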

    Read the article

  • Best practices for PHP MVC routing

    - by dukeofweatherby
    I have a custom MVC framework that is in a constant state of evolution. There's a long-standing debate with a co-worker about how the routing should work. Consider the following directory structure:

        /core/Router.php
        /mvc/Controllers/{Public controllers}
        /mvc/Controllers/Private/{Controllers requiring a valid user}
        /mvc/Controllers/CMS/{Controllers requiring a valid user and specific roles}

    The question is: "Where should the current user's authentication be established: in the Router, when choosing which controller/directory to load, or in each Controller?" My argument is that when authenticating in the Router, an Error Controller is created instead of the requested Controller, informing you of your mishap, and the directory structure clearly indicates the authentication required. His argument is that a router should do routing and only routing; leave it to the Controller to handle authentication on a case-by-case basis, which is more modular and allows more flexibility should changes need to be made to the router. "PHP MVC - Custom Routing Mechanism" alluded to this, but that topic was of a different nature. Alternative suggestions would be welcome as well.
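
    To make the router-side position concrete, here is a minimal sketch of the directory-prefix check (the class and method names are placeholders, not this framework's actual API):

        <?php
        // Router.php (sketch): the directory encodes the required access level
        $requiredRole = null;
        if (strpos($controllerPath, '/mvc/Controllers/CMS/') === 0) {
            $requiredRole = 'cms';      // valid user with a specific role
        } elseif (strpos($controllerPath, '/mvc/Controllers/Private/') === 0) {
            $requiredRole = 'user';     // any valid user
        }

        if ($requiredRole !== null && !$auth->hasRole($requiredRole)) {
            // hand back an Error Controller instead of the requested one
            $controller = new ErrorController('Authentication required');
        } else {
            $controller = new $controllerClass();
        }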

    Read the article

  • Bootable DVD installs Ubuntu on one computer but not on the other... Why? [closed]

    - by SAM
    Possible Duplicate: My computer boots to a black screen, what options do I have to fix it?

    I have two computers, both Windows 7 / Intel. On one computer the Ubuntu bootable DVD (AMD64) works properly. On the other computer the same DVD boots OK, but when I click "Install Ubuntu" a blank screen with a blinking cursor (_) appears and it just keeps blinking forever. What problem could there be with computer 2? Could it be the DVD reader (both computers have LG DVD-RW drives)? Could there be a problem with the DVD itself?

    Computer 1 specs: Pentium D 3 GHz, Windows 7 32-bit; not a 64-bit-capable processor, yet the Ubuntu 64-bit trial/installer still runs...

    Computer 2 specs: Core i7 2700K, Windows 7 32-bit, nVidia GTX 560 graphics card... the BIG BOSS... and still it can't run the setup/trial/disk-check/memory-test?! Could it be a problem with the graphics card?

    I also tried burning another DVD, which behaves the same way. And yes, the DVD image is: ubuntu-12.04.1-dvd-amd64.iso. Any help is appreciated.

    Read the article

  • Performance Testing – Quick Reference Guide – Released on CodePlex

    - by Shawn Cicoria
    Why performance test at all, right? Well, physics still plays a role in what we do, so why not take a better look at your application? Need help? The Rangers team just released the following: http://vstt2008qrg.codeplex.com/ (it has both VS2008 and VS2010 content).

    Visual Studio Performance Testing Quick Reference Guide (Version 2.0): the final released copy is here and ready for full-time use. Please enjoy and post feedback on the discussion board. This document is a collection of items from public blog sites, Microsoft® internal discussion aliases (sanitized) and the experiences of various test consultants in the Microsoft Services Labs. The idea is to provide quick reference points around various aspects of Microsoft Visual Studio® performance testing features that may not be covered in core documentation, or may not be easily understood. The different types of information cover:

    How does this feature work under the covers?
    How can I implement a workaround for this missing feature?
    This is a known bug and here is a fix or workaround.
    How do I troubleshoot issues I am having

    Read the article

  • How can I install from a 9.04 live USB/DVD?

    - by bstpierre
    I have a 9.04 (Jaunty) ISO burned to a USB stick; it appears to be a "live DVD". When I boot from it, I get a GRUB menu listing:

        Ubuntu, with Linux 2.6.35-generic (this matches the system currently installed on the HDD?)
        Ubuntu, with Linux 2.6.35-generic (recovery mode)
        Memory test
        Ubuntu 9.04, kernel 2.6.28-11-generic (on /dev/sda1)
        Ubuntu 9.04, kernel 2.6.28-11-generic (recovery mode) (on /dev/sda1)
        Ubuntu 9.04, memtest86+ (on /dev/sda1)

    When I select "Ubuntu 9.04, kernel 2.6.28-11-generic (on /dev/sda1)", I arrive at the desktop of a 9.04 system. I want to wipe the HDD clean and install 9.04. (Upgrading to something newer is not an option; this version is required by a legacy application.) How can I install from this live USB image? I vaguely remember some incantation that I should be able to use in the booted system, but my google-fu is broken at the moment. I'm comfortable with low-level commands, so if you want to recommend a more hard-core strategy, I'm willing to roll with it without requiring a ton of detail...
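
    The half-remembered incantation is most likely just the desktop installer itself, which a live session can launch by hand (a sketch; it assumes the stick really is the standard live desktop image, where the ubiquity installer ships):

        # start the graphical installer from the running live session
        sudo ubiquity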

    Read the article
