Search Results

Search found 19881 results on 796 pages for 'log analysis'.

  • Unexpected advantage of Engineered Systems

    - by user12244672
    It's not surprising that Engineered Systems accelerate the debugging and resolution of customer issues. But what has surprised me is just how much faster issue resolution is with Engineered Systems such as SPARC SuperCluster. These are powerful, complex systems used by customers wanting extreme database performance, app performance, and cost-saving server consolidation. A SPARC SuperCluster consists of 2 or 4 powerful T4-4 compute nodes, 3 or 6 extreme performance Exadata Storage Cells, a ZFS Storage Appliance 7320 for general purpose storage, and ultra fast Infiniband switches, each with its own firmware. It runs Solaris 11, Solaris 10, 11gR2, LDoms virtualization, and Zones virtualization on the T4-4 compute nodes, a modified version of Solaris 11 in the ZFS Storage Appliance, a modified and highly tuned version of Oracle Linux running Exadata software on the Storage Cells, another Linux derivative in the Infiniband switches, etc. It has an Infiniband data network between the components, a 10Gb data network to the outside world, and a 1Gb management network. And customers can run whatever middleware and apps they want on it, clustered in whatever way they want. In one word, powerful. In another, complex.
    The system is highly Engineered. But it's designed to run general purpose applications. That is, the physical components, configuration, cabling, virtualization technologies, switches, firmware, Operating System versions, network protocols, tunables, etc. are all preset for optimum performance and robustness. That improves the customer experience, as what the customer runs leverages our technical know-how and best practices and is what we've tested intensively within Oracle. It should also make debugging easier by fixing a large number of variables which would otherwise be in play if a customer or Systems Integrator had assembled such a complex system themselves from the constituent components. For example, there are myriad network protocols which could be used with Infiniband, myriad ways the components could be interconnected, myriad tunable settings, etc.
    But what has really surprised me - and I've been working in this area for 15 years now - is just how much easier and faster Engineered Systems have made debugging and issue resolution. All those error opportunities for sub-optimal cabling, unusual network protocols, sub-optimal deployment of virtualization technologies, issues with 3rd party storage, issues with 3rd party multi-pathing products, etc., are simply taken out of the equation. All those error opportunities for making an issue unique to a particular set-up, the "why aren't we seeing this on any other system?" type questions, the doubts, just go away when we or a customer discover an issue on an Engineered System. It enables a really honed response, getting to the root cause much, much faster than would otherwise be the case. Here are a couple of examples from the last month, one found in-house by my team, one found by a customer:
    Example 1: We found a node eviction issue running 11gR2 with Solaris 11 SRU 12 under extreme load on what we call our ExaLego test system (mimics an Exadata / SuperCluster 11gR2 Exadata Storage Cell set-up). We quickly established that an enhancement in SRU 12 enabled an 11gR2 process to query Infiniband's Subnet Manager, replacing a fallback mechanism it had used previously. Under abnormally heavy load, the query could return results which were misinterpreted, resulting in node eviction.
    In several daily joint debugging sessions between the Solaris, Infiniband, and 11gR2 teams, the issue was fully root-caused, evaluated, and a fix agreed upon. That fix went back into all Solaris releases the following Monday. From initial issue discovery to the fix being put back into all Solaris releases was just 10 days.
    Example 2: A customer reported sporadic performance degradation. The reasons were unclear and the information sparse. The SPARC SuperCluster Engineered Systems support team, which comprises both SPARC/Solaris and Database/Exadata experts, worked to root-cause the issue. A number of contributing factors were discovered, including tunable parameters. An intense collaborative investigation between the engineering teams traced the root cause to a CPU-bound networking thread which was being starved of CPU cycles under extreme load. Workarounds were identified. Modifications have been put back into 11gR2 to alleviate the issue, and a development project already underway within Solaris has been sped up to provide the final resolution on the Solaris side. The fixed SPARC SuperCluster configuration greatly aided issue reproduction and dramatically sped up root cause analysis, allowing the correct workarounds and fixes to be identified, prioritized, and implemented. The customer is now extremely happy with performance and robustness. Since the configuration is common to other customers, the lessons learned are being proactively rolled out to other customers and incorporated into the installation procedures for future customers. This effectively acts as a turbo-boost to performance and reliability for all SPARC SuperCluster customers.
    If this had occurred in a "home grown" system of this complexity, I expect it would have taken at least 6 months to get to the bottom of the issue. But because it was an Engineered System, known, understood, and qualified by both the Solaris and Database teams, we were able to collaborate closely to identify cause and effect and expedite a solution for the customer. That is a key advantage of Engineered Systems which should not be underestimated. Indeed, the initial issue mitigation on the Database side, followed by the final fix on the Solaris side, highlights the high degree of collaboration and excellent teamwork between the Oracle engineering teams. It's a compelling advantage of the integrated Oracle Red Stack in general and Engineered Systems in particular.

    Read the article

  • NVIDIA proprietary driver logging me to console instead of GUI

    - by Woozie
    Firstly, I want to apologise for any mistakes; English is not my native language. My problem is that I can't get the NVIDIA proprietary drivers to work. I tried to install them on Ubuntu 12.04.1 32 and 64 bit, Ubuntu 12.10 Beta 2, Linux Mint 13 Cinnamon 64 bit and openSUSE 12.2 64 bit, and the error and symptoms (being logged in to tty1 instead of the graphical login, low-res boot screen) are the same for all of these distros. Right, I didn't say what the error is. It appears on sudo startx:
    NVIDIA: could not open the device file /dev/nvidia0 (Input/output error).
    I know that's a common problem, but I have tried blacklisting or even removing the nouveau drivers, installing the NVIDIA driver from the repo / from the official script / in "Additional Drivers", editing xorg.conf, using Xorg -configure and nvidia-xconfig, updating the kernel and the entire distro, and many, many other things that I don't remember. But the problem gets even better: the entire Cinnamon desktop (Mint) freezes while I work. I found the error which appears during the freeze:
    Oct 1 20:57:17 WoozieLaptop kernel: [ 308.120176] [drm] nouveau 0000:01:00.0: PFIFO_CACHE_ERROR - Ch 4/1 Mthd 0x0060 Data 0xbcef0201
    My Xorg.0.log is here. It was made on Ubuntu 12.04.1 after installing the NVIDIA drivers (obviously). inxi -G from Mint:
    Graphics: Card: NVIDIA GT216 [GeForce GT 240M] X.org: 1.11.3 drivers: (unloaded: nvidia) FAILED: nouveau,vesa,fbdev tty size: 80x25 Advanced Data: N/A for root out of X
    lspci -k | grep -A2 VGA from Mint:
    01:00.0 VGA compatible controller: NVIDIA Corporation GT216 [GeForce GT 240M] (rev a2) Subsystem: Lenovo Device 38ff Kernel driver in use: nvidia
    My hardware is: Lenovo IdeaPad Y550, Intel C2D T6600, NVIDIA GeForce GT 240M, 4 GB of RAM. Any help will be appreciated; this problem has made my laptop unusable for daily work. Cheers, Woozie
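    A possible first step (an untested sketch, not from the original post) is to make sure nouveau really stays out of the way before the NVIDIA module loads; the file name below is an assumption, any file under /etc/modprobe.d/ works:

        # /etc/modprobe.d/blacklist-nouveau.conf  (hypothetical file name)
        blacklist nouveau
        options nouveau modeset=0

    Then rebuild the initramfs and reboot so the blacklist takes effect at boot time:

        sudo update-initramfs -u
        sudo reboot

    If /dev/nvidia0 still reports an Input/output error afterwards, the nvidia kernel module itself is likely failing to attach to the card, which points back at the driver build rather than at nouveau.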

    Read the article

  • Ubuntu mass deployment kickstart file: how/where?

    - by gkrawiec
    I've successfully been able to prepare an OEM image that is ready to be cloned and installed on about 1100 machines. My only issue right now is that when a machine boots for the first time it asks the basic setup questions. I think I have the kickstart file ready, but I don't know how to invoke it. My logic says that before I run the "prepare to ship to end user" script I have to modify the boot parameters to point at the ks file, so that the ks.cfg file goes with each drive. My issue is that I can't figure out how to modify the boot parameters. Also, I don't know if there is a log I can check to see whether it is actually being called or not. I am using Ubuntu 12.04 desktop x64. I am trying /etc/default/grub, modifying one line from GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ks=file:/ks.cfg" and then running update-grub, but it's not working. My ks.cfg file is:
    -----------------------
    #Generated by Kickstart Configurator
    #System language
    lang en_US
    #System keyboard
    keyboard us
    #System timezone
    timezone America/Tijuana
    #Initial user
    user mytestuser --fullname "Test User" --iscrypted --password $sdfsfsdgthrttyujtkyktru
    #Reboot after installation
    reboot
    -------------------------
    What am I doing wrong? thanks, -gk
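    A small sanity check worth doing (a sketch added here, not from the original post): confirm whether the edited parameter actually reaches the installed system's kernel command line, since update-grub only regenerates /boot/grub/grub.cfg and the result is easy to verify after a reboot.

        # edit the default kernel parameters
        sudo nano /etc/default/grub
        #   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ks=file:/ks.cfg"
        sudo update-grub

        # after rebooting, check that the parameter made it through
        cat /proc/cmdline

    If ks=file:/ks.cfg shows up in /proc/cmdline but the first-boot questions still appear, the parameter is being passed but nothing on the target image is consuming it. As an assumption about the tooling rather than something from the original post: the ks= option is read by the installer (the kickseed component), not by the post-install oem-config first-boot wizard, so preseeding oem-config may be the mechanism to look at for an already-installed OEM image.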

    Read the article

  • Western Digital Caviar Black: EXT4-fs error

    - by azat
    Recently I upgraded the HDD in my desktop machine and bought a WD Caviar Black. But after I formatted it, copied the data over (using dd), and fixed the partition sizes, I get the following errors in kern.log:
    Aug 27 16:04:35 home-spb kernel: [148265.326264] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9054, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:07:11 home-spb kernel: [148421.493483] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9045, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:17 home-spb kernel: [148546.481693] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 10299, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:17 home-spb kernel: [148546.487147] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:42 home-spb kernel: [148572.258711] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4345, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:42 home-spb kernel: [148572.277591] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:42 home-spb kernel: [148572.278202] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4344, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:42 home-spb kernel: [148572.284760] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:42 home-spb kernel: [148572.291983] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9051, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:42 home-spb kernel: [148572.297495] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:42 home-spb kernel: [148572.297916] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9050, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:42 home-spb kernel: [148572.297940] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:42 home-spb kernel: [148572.303213] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4425, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:42 home-spb kernel: [148572.312127] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:42 home-spb kernel: [148572.312487] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4424, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:42 home-spb kernel: [148572.317858] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:42 home-spb kernel: [148572.322231] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4336, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:42 home-spb kernel: [148572.326250] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:42 home-spb kernel: [148572.326599] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 4335, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:42 home-spb kernel: [148572.332397] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:42 home-spb kernel: [148572.341957] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 5764, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:42 home-spb kernel: [148572.350709] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:42 home-spb kernel: [148572.351127] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 5763, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:42 home-spb kernel: [148572.355916] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:43 home-spb kernel: [148572.401055] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 10063, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:43 home-spb kernel: [148572.404357] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:43 home-spb kernel: [148572.414699] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 10073, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:43 home-spb kernel: [148572.420411] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    Aug 27 16:09:43 home-spb kernel: [148572.493933] EXT4-fs error (device sdc2): ext4_mb_generate_buddy:739: group 9059, 32254 clusters in bitmap, 32258 in gd
    Aug 27 16:09:43 home-spb kernel: [148572.493956] JBD2: Spotted dirty metadata buffer (dev = sdc2, blocknr = 0). There's a risk of filesystem corruption in case of system crash.
    One time the machine rebooted (not manually); when I turned it on, it ran fsck on /dev/sdc2, fixed some errors, and now some files are missing on /dev/sdc2. I've checked /dev/sdc2 for bad blocks and it doesn't have any (using e2fsck -c /dev/sdc2). Here is the output of fsck: http://pastebin.com/D5LmLVBY What else can I do to understand what's wrong here? BTW, for /dev/sdc1 there are no messages like that in kern.log. Linux version: 3.3.0. Distribution: Debian wheezy.
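    One avenue worth checking (a sketch added here, not from the original post) is whether the drive itself or its cabling is at fault, since buddy-bitmap mismatches across many groups often point below the filesystem. SMART data and a forced offline check are the usual first steps; the device names follow the post and may differ on another machine:

        # SMART health and error counters for the new disk
        sudo smartctl -a /dev/sdc

        # unmount, then force a full check of the filesystem
        sudo umount /dev/sdc2
        sudo e2fsck -f /dev/sdc2

    Since the filesystem was copied with dd, it is also worth confirming that the partition was not resized without a matching resize2fs, because an ext4 superblock that believes the filesystem is a different size than the partition can produce exactly this kind of group-descriptor mismatch.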

    Read the article

  • Can I save an Apache environment variable value with SetEnv?

    - by Nicholas Tolley Cottrell
    I am running Apache 2.2 with Tomcat 6 and have several layers of URL rewriting going on, both in Apache with RewriteRule and in Tomcat. I want to pass through the original REQUEST_URI that Apache sees so that I can log it properly for "page not found" errors etc. In httpd.conf I have the line: SetEnv ORIG_URL %{REQUEST_URI} and in mod_jk.conf I have: JkEnvVar ORIG_URL which I thought should make the value available via request.getAttribute("ORIG_URL") in Servlets. However, all that I see is "%{REQUEST_URI}", so I assume that SetEnv doesn't interpret the %{...} syntax. What is the right way to get the URL the user requested in Tomcat?
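    The observation in the question is right: SetEnv assigns a literal string and does not expand %{...} expressions. One hedged sketch of an alternative (standard mod_rewrite behaviour, though untested against this exact setup) is to set the environment variable from a rewrite rule, which does evaluate %{REQUEST_URI}, and then export it to Tomcat as before:

        # httpd.conf (assumes mod_rewrite is loaded)
        RewriteEngine On
        # capture the original request URI into ORIG_URL without changing the request
        RewriteRule .* - [E=ORIG_URL:%{REQUEST_URI}]

        # mod_jk.conf, unchanged from the question
        JkEnvVar ORIG_URL

    In the servlet, request.getAttribute("ORIG_URL") should then return the captured path. Note that mod_rewrite may expose the value as REDIRECT_ORIG_URL after internal redirects, so it can be worth exporting both names with JkEnvVar.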

    Read the article

  • Click and Drag from Clickpad stops working after a while 12.04

    - by Jason O'Neil
    I've got a Samsung notebook (NP-QX412-S01AU) with a touchpad / clickpad. I'm running 12.04 Precise. When I first log into my computer, the touchpad behaves exactly as expected and desired. The longer I stay logged in, the more it degrades. I'll try to describe it. There are 3 ways of "dragging" on this clickpad:
    1. (Physical) click and hold with one finger, and drag around while still holding it down. All with one finger.
    2. (Physical) click and hold with one finger, then with another finger drag around to move the cursor.
    3. Double tap (not a physical click) and, on the second tap, hold and drag.
    I most naturally use option 1, but here's how it behaves: when I first turn on, options 1, 2 and 3 all work. After a while, only options 2 and 3 work. Later still, only option 3 works. Restarting X causes all 3 to work again. I've compared the output of "synclient" in each of the states, and there was no difference. Anybody know what to look at? Or at the very least, a command I can run to "restart" the mouse driver without restarting X?
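    One way to reinitialise the touchpad without restarting X (a hedged sketch; the module name is the common case for PS/2-attached Synaptics pads and may differ on this Samsung model) is to reload the psmouse kernel module:

        sudo modprobe -r psmouse
        sudo modprobe psmouse

    The pad stops responding for a moment after the remove step and comes back with the second command. If the drag behaviour recovers the same way it does after restarting X, that narrows the fault to the kernel/input side rather than to the X synaptics driver settings.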

    Read the article

  • Compiling C++ code with mingw under 12.04

    - by golemit
    I tried to set up cross-compiling of C++ projects under my Ubuntu 12.04 with mingw and the Qt libraries. The idea was to get an executable independent of the variations in target Windows versions and in my colleagues' development environments. It was successfully implemented under openSUSE 12.2 with mingw32 and some additional libraries, including mingw32-libqt4 and some others. Fine. However, when trying to do the same under Ubuntu 12.04 with mingw-w64, including the latest Qt 4.8.3 libraries copied from Windows, there were always errors. No luck. The typical errors in these attempts can be seen in the attachments. The commands used:
    qmake -spec /path_to_my_conf/win32-x-g++ my_project.pro
    make
    Can someone give a hint about the source of the problem? I would appreciate good advice. Serge
    Some extracts from the log:
    ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0xec): undefined reference to `QDialog::accept()'
    ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0xf0): undefined reference to `QDialog::reject()'
    ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x104): undefined reference to `non-virtual thunk to QWidget::devType() const'
    ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x108): undefined reference to `non-virtual thunk to QWidget::paintEngine() const'
    ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x10c): undefined reference to `non-virtual thunk to QWidget::getDC() const'
    ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x110): undefined reference to `non-virtual thunk to QWidget::releaseDC(HDC__*) const'
    ./.obj/moc_xlseditor.o:moc_xlseditor.cpp:(.rdata$_ZTV10GXlsEditor[vtable for GXlsEditor]+0x114): undefined reference to `non-virtual thunk to QWidget::metric(QPaintDevice::PaintDeviceMetric) const'
    ./.obj/qrc_images.o:qrc_images.cpp:(.text+0x24): undefined reference to `__imp___Z21qRegisterResourceDataiPKhS0_S0_'
    ./.obj/qrc_images.o:qrc_images.cpp:(.text+0x64): undefined reference to `__imp___Z23qUnregisterResourceDataiPKhS0_S0_'
    collect2: ld returned 1 exit status
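    The undefined references are all to QtGui/QtCore symbols, which usually means the linker is not being handed the Windows Qt import libraries, or is being handed ones built with an incompatible toolchain. A hedged sketch of the kind of entries the custom mkspec or .pro file would need; the path is an assumption standing in for wherever the Windows Qt 4.8.3 tree was copied:

        # my_project.pro (sketch)
        QT += core gui
        # point the cross-linker at the Windows Qt import libraries and headers
        LIBS += -L/opt/qt-4.8.3-win32/lib
        INCLUDEPATH += /opt/qt-4.8.3-win32/include

    Two things worth checking: that MinGW import libraries such as libQtCore4.a and libQtGui4.a actually exist in that lib directory (rather than only MSVC .lib files), and that the copied Qt build matches the mingw-w64 target, since symbols like __imp___Z21qRegisterResourceData... are decorated differently for 32-bit and 64-bit Windows targets.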

    Read the article

  • What is Stackify?

    - by Matt Watson
    You have developers, applications, and servers. Stackify makes sure that they are all working efficiently. Our mission is to give developers the integrated tools they need to better troubleshoot and monitor the applications they create and the servers that they run on. Traditional IT operations tools are designed for network and system administrators. Developers commonly spend 30% of their time working with IT Operations remediating application service problems, yet they currently lack tools to efficiently support the applications they create. Stackify delivers the application support functionality that developers need:
    - View application deployment locations, versions, and history
    - Browse files on servers to ensure proper deployments
    - Access configuration and log files on servers
    - Remotely restart Windows services, scheduled tasks, and web applications
    - Basic server monitoring and alerts
    - Collect all application exceptions to a centralized point
    - Log and report on custom application events
    Stackify is building an integrated DevOps solution delivered from the cloud, designed to meet the needs of developers but also to help unify the working relationship with IT operations teams and existing security roles. Our goal is to help unify the interaction between developers and IT operations. Stackify allows both teams to have visibility that they never had before, to solve complex application service issues more easily and faster. Stackify’s CEO and CTO both have experience managing very large and high-growth software development teams. That experience is driving our design in Stackify to deliver the integrated tools we always wished we had: the next generation of development operations tools.

    Read the article

  • Java application crashes my computer. How do I troubleshoot?

    - by Oded
    I am using NetBeans 4.1 for my university course (this is an older version, but is the required version for the course - I can't use a newer version). Whenever I use it for longer than several minutes, my computer crashes - it either reboots or I need to reset it. I have tried running with all startup items disabled (to rule out other applications interfering with the app), but it did no good. I have used Sysinternals procmon logging and the logs are corrupt - the only way I was able to get a good log was by enabling boot logging. However these are huge and I don't know what to look for. I am using Windows XP SP3, fully patched up and this is the only application that I have any kind of problem with. Can anyone suggest troubleshooting steps that will help me pinpoint the cause of these crashes and fix them?

    Read the article

  • Fresh install of Ubuntu 12.10 won't boot on Asus X101CH Eee PC

    - by Najdmie
    I did a fresh install of Ubuntu 12.10 on my Asus X101CH Eee PC, using a live USB which I made with Startup Disk Creator, replacing Ubuntu 12.04. The installation ran smoothly, but when I boot, it goes to a purple screen for a second, then a lot of text like the following shows up in sequence:
    Starting crash report submission daemon [OK]
    Starting CPU interrupts balancing daemon [OK]
    Stopping save kernel messages [OK]
    _
    And the cursor just keeps blinking for hours. I can't log in. Pressing Alt + F2 did not bring me to console mode. I thought it might be a partition problem, so I formatted the whole disk by creating a new partition table using GParted from the Ubuntu 12.04 live USB, but the same problem arose. I noticed that I can't try Ubuntu using the 12.10 live USB either; it just goes to a blank screen when I hit the 'Try Ubuntu' button. I even changed the pen drive for the live USB a couple of times. I happen to know that the Intel Atom N2600 Cedar Trail CPU in my computer is not well supported in Linux; I managed to install its drivers in Ubuntu 12.04, although the screen went blank during the installation.
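    The GMA 3600 graphics in the Atom N2600 is a common culprit for exactly this blank-screen / text-console behaviour. A hedged first test (standard advice, not from the original post) is to boot once with kernel mode setting disabled and see whether the desktop appears:

        # at the GRUB menu (hold Shift if it is hidden), press 'e' on the Ubuntu entry
        # and append to the line starting with "linux":
        #   ... quiet splash nomodeset

        # if that boots, make it permanent:
        sudo sed -i 's/quiet splash/quiet splash nomodeset/' /etc/default/grub
        sudo update-grub

    This only rules the graphics driver in or out; if nomodeset makes no difference, the hang is more likely elsewhere in the boot sequence.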

    Read the article

  • How to get Windows Server 2008 VM to use multiple cores

    - by David Fraser
    I have a Windows Server 2008 machine running in VirtualBox. On initial installation, only one processor was made available, but now I want to run it as a multiprocessor machine. I have made all four cores available in the VirtualBox settings (as well as enabling VT-x/AMD-V and Nested Paging), but Task Manager still only shows one CPU. However, the four CPU cores are visible in Device Manager under Processors. In the event log on startup, I can see the following relevant events:
    EventLog.6009  Microsoft (R) Windows (R) 6.00.6002 Service Pack 2 Multiprocessor Free
    Kernel-Processor-Power.4  Processor 0 exposes the following: 1 idle state(s), 0 performance state(s), 0 throttle state(s)
    Kernel-Processor-Power.4  Processor 255 exposes the following: 0 idle state(s), 0 performance state(s), 0 throttle state(s)
    Kernel-Processor-Power.4  Processor 255 exposes the following: 0 idle state(s), 0 performance state(s), 0 throttle state(s)
    Kernel-Processor-Power.4  Processor 255 exposes the following: 0 idle state(s), 0 performance state(s), 0 throttle state(s)
    How can I make this system actually boot up as a multiprocessor machine?
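    In VirtualBox, an SMP guest also needs the I/O APIC enabled; with it off, the guest typically keeps running with a single usable CPU no matter how many cores are assigned. A hedged sketch with the VM name as a placeholder:

        # with the VM powered off
        VBoxManage modifyvm "Win2008" --cpus 4 --ioapic on

    After booting, if Task Manager still shows one CPU, it is worth checking which HAL Windows is using under Device Manager > Computer; Windows Server 2008 normally redetects the hardware configuration at boot, and the repeated "Processor 255" entries in the event log suggest the additional processors are not being initialised by the current configuration.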

    Read the article

  • Altering policies in Policy-Based Management to look only at events which happened in the last 24 hours

    - by Manjot
    Hi, I am using SQL Server 2008 Standard Edition. I am using Policy-Based Management with the policies which come with SQL Server during installation. I want the policies to look only at events that happened in the last 24 hours. For example, for the "Windows Event Log System Failure Error" policy, if the system restarted unexpectedly 5 days ago, I don't want to be alerted daily. Is there any way by which I can restrict a policy to look at events which happened in the last 24 hours and no older? Any help please? Thanks in advance.
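    The built-in policy evaluates the Windows event log through a WQL query, so one possible direction (a rough, untested sketch; the exact condition expression in the shipped policy may differ) is to edit the policy's condition and add a time filter to that query, using WMI's DMTF datetime format for the cutoff:

        SELECT EventCode, TimeWritten
        FROM Win32_NTLogEvent
        WHERE Logfile = 'System'
          AND EventCode = 6008
          AND TimeWritten > '20101213000000.000000-000'

    The literal cutoff would need to be regenerated regularly (for example by a job that rewrites the condition, or by replacing the check with a scheduled T-SQL job that computes DATEADD(hour, -24, GETDATE()) itself), which is why many people find it simpler to replace this particular policy with an Agent job rather than bend the shipped condition.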

    Read the article

  • I can't work locally unless connected to the internet - how to fix?

    - by Rodney
    Hi, in Firefox, when I am disconnected from the net, I want to work locally on my local IIS server (Windows XP, Firefox 3.5.10). I do NOT have Work Offline checked, but Firefox says that it cannot find my site (i.e. the message Firefox shows if you try to access an online site while offline). This applies to any localhost URL. I tried 127.0.0.1 and checked my hosts file; that does not work either. If I check Work Offline, then it shows the Firefox message that the site cannot be reached because I have Work Offline checked. Unchecking it does not help. Then I load up Safari, copy and paste the URL into that browser, and it connects to my development localhost site. It is not just browser caching, as I can log in etc. So Firefox will not let me develop locally unless I am connected to the internet, which is a problem. Suggestions please?
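    Firefox 3.5 tracks the operating system's online/offline state and silently switches itself into offline mode when Windows reports no connection, which matches these symptoms. A hedged sketch of the usual workaround (a real preference, though whether it fully fixes this particular setup is untested here): in about:config set network.manage-offline-status to false, or add it to user.js in the profile folder:

        // user.js in the Firefox profile directory
        user_pref("network.manage-offline-status", false);

    With that preference off, Firefox stops following the OS connection state and only goes offline when Work Offline is checked explicitly.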

    Read the article

  • I'm stuck on the User Defined Session desktop environment

    - by Dan
    I just installed Ubuntu for the first time as a dual boot, so I get to choose Ubuntu or Windows. I then changed the setting so that it doesn't ask for my password when booting up. I then installed the Edubuntu desktop package. I then hit System and logged out, so that I could be at the login screen that also lets you select the desktop environment. Edubuntu was not there, but "User Defined Session" was, so I clicked that, thinking it might be Edubuntu, and logged in. Now I'm totally stuck. There is only wallpaper on the screen (as I realize now, that is normal for a user-defined session), but there is no log out button to change desktop environments, and since I set it not to ask for a password at boot there is no option to change it at start up. If I hit Ctrl+Alt+Del it only lets you shut down, restart, suspend, or hibernate... no log out. I have hit every key on the keyboard hoping something will pop up. I thought this must be a simple noob mistake and that there must be endless articles about it, so I searched Google and the forums and was shocked to find nothing. My next step, unless someone can help, is to uninstall and reinstall.
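    Rather than reinstalling, it should be possible to get back to the session chooser from a text console (a hedged sketch; the display manager name depends on the Ubuntu release, gdm on older releases and lightdm on 11.10 and later):

        # switch to a text console with Ctrl+Alt+F1 and log in, then restart the display manager
        sudo service lightdm restart     # or: sudo service gdm restart

    That returns to the graphical login, where the session menu lets you pick the Edubuntu/GNOME session instead of "User Defined Session". If automatic login skips the chooser entirely, disabling automatic login in the Login Screen settings (or in the display manager's configuration) brings the chooser back at the next boot.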

    Read the article

  • Credentials work for SSMS but not (ODBC) LogParser script

    - by justSteve
    Via SSMS I'm able to connect and navigate the server/db in question, but when trying to connect via a LogParser script the same credentials fail. I'm trying to execute this from the same box on which the server is running. The username is owner/dbo of the db. The db has mixed mode authentication. [linebreaks for clarity]
    C:\TTS\tools\LogParser> c:\tts\tools\logparser\logparser
      file:c:\tts\tools\logparser\errors2SQL.sql?source="C:\inetpub\logs\LogFiles\W3SVC8\u_ex100521.log"
      -i:IISW3C -o:SQL -createTable:ON
      -oConnString:"Driver={SQL Server Native Client 10.0};Server=servername\SQLEXPRESS;db=Tter;uid=logger2;pwd=foo"
      -stats:OFF
    Task aborted.
    Error connecting to ODBC Server
    SQL State: 28000
    Native Error: 18456
    Error Message: [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'logger2'.
    C:\TTS\tools\LogParser>
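    Error 18456 with SQL State 28000 is a straight authentication rejection, and the usual causes here are the instance not actually accepting SQL logins, or the login being disabled or having a different password than expected. A hedged sketch of checks to run from SSMS (standard T-SQL, not taken from the original post):

        -- 1 = Windows authentication only, 0 = mixed mode
        SELECT SERVERPROPERTY('IsIntegratedSecurityOnly');

        -- confirm the SQL login exists on this instance and is not disabled
        SELECT name, is_disabled FROM sys.sql_logins WHERE name = 'logger2';

    It is also worth noting that the connection string points at Server=servername\SQLEXPRESS; if SSMS was connecting to a different instance (for example the default instance), the logger2 login may simply not exist on the SQLEXPRESS instance the script is hitting.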

    Read the article

  • Can't play stream from TorrentFlux server

    - by thegreyspot
    I am trying to stream a video from my TorrentFlux-b4rt server. I tried multiple media players; none work. Only VLC was able to produce an error message: input can't be opened: VLC is unable to open the MRL 'mms://..*.*:8080/'. Check the log for details. I have tried multiple computers on different networks and all have the same issue. I am using Windows 7 to play the videos, and the server is TorrentFlux-b4rt 1.0-beta2 on Ubuntu 9.10.

    Read the article

  • Can't use nvidia card/driver on optimus notebook

    - by Mr. Pixel
    I installed (once again) the latest official NVIDIA driver for my GT540M on Ubuntu 11.10. Everything seems OK with my xorg.conf file (I've manually added BusID "PCI:1:0:0", since lspci shows 01:00.0 for my GPU). The problem is that when I use the xorg.conf file generated by Xorg -configure, Xorg automatically loads the Intel GPU. So I removed everything that was not related to my NVIDIA card, basically leaving my xorg.conf with one screen and one device (with the nvidia driver and the above-mentioned BusID), and Xorg fails to start. The log says something like "Devices on GT540m", followed on the next line by "none", and a few lines later, something like "NVIDIA(0) found a screen, but have no device for it". When I don't set the BusID, it doesn't seem to detect my card either. Thank you for any suggestion. PS: If possible, I'd like to avoid Bumblebee or any similar "hybrid graphics" solution; last time I tried, I ended up reinstalling Ubuntu. Edit: Allow me to clarify the problem. I have a notebook with a GT540M graphics card and an integrated Intel GPU. I want to use the graphics card with full hardware acceleration and its official driver, as I do under Windows.

    Read the article

  • Ubuntu input/output error

    - by rplevy
    I'm having a problem with Ubuntu that I'm finding hard to troubleshoot, for reasons that will become clear:
    reboot
    -bash: /sbin/reboot: Input/output error
    dmesg
    -bash: /bin/dmesg: Input/output error
    ps -e
    ps: error while loading shared libraries: /lib/libproc-3.2.8.so: cannot read file data: Input/output error
    lsof
    -bash: /usr/bin/lsof: Input/output error
    fsck
    -bash: /sbin/fsck: Input/output error
    badblocks
    -bash: /sbin/badblocks: Input/output error
    So I can't see what is going on, and I can't remotely reboot. What can I do to get to the bottom of this? Interestingly:
    init 0
    Segmentation fault
    I can cat /var/syslog but not /var/log/messages or several other important files. less and more don't work, and neither do tail or head, etc.
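    With that many binaries unreadable, the root filesystem or the disk under it is the prime suspect. Since cat still works, a hedged sketch of what can still be inspected from the broken shell, plus the usual next step from a rescue environment; device names are placeholders and which commands still execute depends on how much of the disk is readable:

        # has the root filesystem been remounted read-only or errored out?
        cat /proc/mounts

        # kernel messages are often still readable even when dmesg is not
        cat /var/log/kern.log

        # from a live CD / rescue system, with the disk unmounted:
        fsck -f /dev/sda1
        smartctl -a /dev/sda

    If the kernel log (or the console) shows ATA or I/O errors for the underlying device, this is a failing disk or cable rather than a filesystem problem, and the priority becomes copying data off before anything else.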

    Read the article

  • subversion problem on mac os x

    - by Mohsin Jimmy
    This exists in my httpd.conf file:
    <Location /svn>
      DAV svn
      SVNParentPath /Users/iirp/Sites/svn
      Allow from all
      #AuthType Basic
      #AuthName "Subversion repository"
      #AuthUserFile /Users/iirp/Sites/svn-auth-file
      #Require valid-user
    </Location>
    This works fine. When I change it to:
    <Location /svn>
      DAV svn
      SVNParentPath /Users/iirp/Sites/svn
      #Allow from all
      AuthType Basic
      AuthName "Subversion repository"
      AuthUserFile /Users/iirp/Sites/svn-auth-file
      Require valid-user
    </Location>
    and then access my repository through the URL, it gives me the authentication screen, but after that screen my svn repository does not show up correctly. The message it gives me is:
    Internal Server Error
    The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, [email protected] and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log.
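    A 500 after a successful password prompt usually means Apache could not read or parse the password file, rather than that the credentials were wrong. A hedged sketch of the checks (paths follow the question; the htpasswd invocation assumes the file has not already been created in the right format):

        # create the password file in the format mod_auth_basic expects
        htpasswd -cm /Users/iirp/Sites/svn-auth-file myuser

        # make sure the Apache user can read it
        ls -l /Users/iirp/Sites/svn-auth-file

        # the real reason is almost always spelled out here
        tail /var/log/apache2/error_log

    The exact error log path on Mac OS X is an assumption (it may live under /var/log/apache2/ or /private/var/log/apache2/ depending on the version); the entry logged at the time of the 500 will name the failing directive or file.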

    Read the article

  • Ubuntu 10.04: unable to login after fresh install

    - by Richard
    Hello all, I've just installed a fresh copy of Ubuntu 10.04, downloaded a couple of days ago. The installation seemed to go fine. However, I can't log in: the login screen just seems to reset and asks me for my password again. It's not an authentication / incorrect password issue; if I stick in a wrong password, I get "Authentication failure". I've googled around and others report the same issue on the Ubuntu forums, but there doesn't seem to be a fix. Does anyone know of a workaround, or what the problem is? I have 9.10; I might end up just installing that instead. Thanks
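    A login loop with a correct password is very often a permissions problem in the home directory rather than an installer bug. A hedged sketch of the usual checks from a text console (Ctrl+Alt+F1); the username is a placeholder:

        # look for errors written by the failing session
        cat ~/.xsession-errors

        # classic culprits: these must be owned by the user, not by root
        ls -l ~/.Xauthority ~/.ICEauthority
        sudo chown richard:richard ~/.Xauthority ~/.ICEauthority

        # a completely full disk produces the same symptom
        df -h

    If .xsession-errors shows "permission denied" on files under the home directory, fixing the ownership as above and logging in again normally resolves it.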

    Read the article

  • Setting up Shibboleth to secure part of a website

    - by HorusKol
    I've installed the Shibboleth module for Apache on Ubuntu 10.04, using aptitude to install libapache2-mod-shib2 as per https://groups.google.com/group/shibboleth-users/browse_thread/thread/9fca3b2af04d5ca8?pli=1, and enabled the module (I have checked in /etc/apache2/mods-enabled). I then proceeded to secure a directory on the server by placing a .htaccess file with the following directives:
    AuthType shibboleth
    ShibRequestSetting requireSession 1
    Require valid-user
    Now, I haven't set up an SSL host yet, and I also haven't set up the IdP, but I would expect the server to block access to this directory. Instead, I'm getting the content without any problems. I have restarted the apache service and I have no errors in the log files.
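    If the directives are being silently ignored, the first thing to rule out (a hedged sketch; the paths are placeholders for whichever site definition covers this directory) is that Apache is not allowed to read auth settings from .htaccess in that location, since AllowOverride None makes the file a no-op without any error in the logs:

        # in the relevant <VirtualHost> / site file, e.g. /etc/apache2/sites-available/default
        <Directory /var/www/secured>
            AllowOverride AuthConfig
        </Directory>

    Alternatively, putting the same three directives straight into a <Directory> or <Location> block in the site configuration (and reloading Apache) avoids .htaccess processing entirely; if access is then blocked, the override setting was the problem rather than the Shibboleth module.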

    Read the article

  • MPlayer does not work

    - by Soham Pal
    Using the Xubuntu desktop, on Ubuntu Raring updated from Quantal. MPlayer never really worked: no video, no audio, nothing. I really can't be any more helpful, so here's the log:
    petey@home-pc:~$ mplayer "/home/petey/Downloads/Polar Bear Cafe (480p)[HorribleSubs]/[HorribleSubs] Polar Bear Cafe - 01 [480p].mkv"
    MPlayer SVN-r35984-4.7 (C) 2000-2013 MPlayer Team
    Playing /home/petey/Downloads/Polar Bear Cafe (480p)[HorribleSubs]/[HorribleSubs] Polar Bear Cafe - 01 [480p].mkv.
    libavformat version 55.0.100 (internal)
    libavformat file format detected.
    [lavf] stream 0: video (h264), -vid 0
    [lavf] stream 1: audio (aac), -aid 0
    [lavf] stream 2: subtitle (ass), -sid 0
    VIDEO: [H264] 848x480 0bpp 23.810 fps 0.0 kbps ( 0.0 kbyte/s)
    Clip info: creation_time: 2012-04-05 21:36:10
    Load subtitles in /home/petey/Downloads/Polar Bear Cafe (480p)[HorribleSubs]/
    Can't open /dev/fb0: Permission denied
    [fbdev2] Can't open /dev/fb0: Permission denied
    VO: [v4l2] No such file or directory
    vo_cvidix: No vidix driver name provided, probing available ones (-v option for details)!
    [cyberblade] Error occurred during pci scan: Operation not permitted
    [mach64] Error occurred during pci scan: Operation not permitted
    [mga] Error occurred during pci scan: Operation not permitted
    [mga] Error occurred during pci scan: Operation not permitted
    [nvidia_vid] Error occurred during pci scan: Operation not permitted
    [pm3] Error occurred during pci scan: Operation not permitted
    [radeon] Error occurred during pci scan: Operation not permitted
    [rage128] Error occurred during pci scan: Operation not permitted
    [s3_vid] Error occurred during pci scan: Operation not permitted
    [SiS] Error occurred during pci scan: Operation not permitted
    [unichrome] Error occurred during pci scan: Operation not permitted
    [VO_SUB_VIDIX] Couldn't find working VIDIX driver.
    ==========================================================================
    Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
    libavcodec version 55.0.100 (internal)
    Selected video codec: [ffh264] vfm: ffmpeg (FFmpeg H.264)
    ==========================================================================
    ==========================================================================
    Opening audio decoder: [ffmpeg] FFmpeg/libavcodec audio decoders
    AUDIO: 44100 Hz, 2 ch, floatle, 0.0 kbit/0.00% (ratio: 0->352800)
    Selected audio codec: [ffaac] afm: ffmpeg (FFmpeg AAC (MPEG-2/MPEG-4 Audio))
    ==========================================================================
    [AO OSS] audio_setup: Can't open audio device /dev/dsp: No such file or directory
    DVB card number must be between 1 and 4
    AO: [null] 44100Hz 2ch floatle (4 bytes per sample)
    Starting playback...
    Movie-Aspect is 1.78:1 - prescaling to correct movie aspect.
    VO: [null] 848x480 = 854x480 Planar YV12
    A: 4.7 V: 4.7 A-V: 0.002 ct: 0.083 0/ 0 22% 0% 0.5% 0 0
    MPlayer interrupted by signal 2 in module: sleep_timer
    A: 4.7 V: 4.7 A-V: 0.001 ct: 0.083 0/ 0 21% 0% 0.5% 0 0
    Exiting... (Quit)
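    Two lines in that log explain the behaviour: MPlayer falls back to the null video and audio outputs because it cannot open a framebuffer/VIDIX device from within X, and the OSS device /dev/dsp does not exist on a modern Ubuntu. A hedged sketch of forcing sane outputs (x11 and pulse are standard MPlayer drivers, though whether pulse or alsa is right depends on the system's sound setup; the filename is a placeholder):

        mplayer -vo x11 -ao pulse file.mkv

        # to make it permanent, in ~/.mplayer/config:
        vo=x11
        ao=pulse

    If that works, a faster video driver such as -vo xv can be tried next; the point of the sketch is only that the defaults chosen in the log (fbdev2, v4l2, cvidix, and OSS) are not usable on this desktop.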

    Read the article

  • Site Web Analytics not updating Sharepoint 2010

    - by Rohit Gupta
    If you are facing the issue that the Web Analytics reports in SharePoint 2010 Central Administration are not updating data: when you go to your site > Site Settings > Site Web Analytics reports (or Site Collection Analytics reports), you get old data, with the ribbon displaying "Data Last Updated: 12/13/2010 2:00:20 AM". Please ensure that the following things are covered:
    - The Usage and Data Health Data Collection service is configured correctly.
    - The Log Collection Schedule is configured correctly.
    - The Microsoft SharePoint Foundation Usage Data Import and Microsoft SharePoint Foundation Usage Data Processing timer jobs are configured to run at regular intervals.
    - One last important timer job is the Web Analytics Trigger Workflows Timer Job; ensure that this timer job is enabled and scheduled to run at regular intervals (for each site that you need analytics for).
    After you have ensured that the web analytics service configuration is working fine, that the Usage Data Import job is importing the *.usage files from the ULS LOGS folder into the WSS_Logging database, and that all the required timer jobs are running as expected, wait for a day for the report to get updated. The report gets updated automatically at 2:00 AM, and I could not find a way to control the schedule for this report update job. So be sure to wait for a day before giving up :)
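    A quick way to review the relevant timer jobs in one place (a hedged PowerShell sketch for the SharePoint 2010 Management Shell; the name filters are assumptions and may need adjusting to the exact job names on a given farm):

        Get-SPTimerJob | Where-Object { $_.Name -like "*usage*" -or $_.Name -like "*webanalytics*" } |
            Select-Object Name, LastRunTime, Schedule | Format-Table -AutoSize

        # run the usage import immediately instead of waiting for the schedule
        Get-SPTimerJob | Where-Object { $_.Name -like "*usage-log-file-import*" } | Start-SPTimerJob

    Get-SPTimerJob and Start-SPTimerJob are standard SharePoint 2010 cmdlets; seeing a recent LastRunTime on the import and processing jobs confirms the pipeline up to the nightly report refresh is healthy.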

    Read the article

  • Windows scheduler - Tasks not running when user not logged in

    - by Glinkot
    I have Windows Server 2003, with schedules set up via Remote Desktop under one account. That account appears in the 'creator' column too. I have 'Run only if logged on' unticked. When I have logged in under that account and then 'disconnected', leaving the session alive, the schedule runs. But every time the server is rebooted, the task again fails to run until I log in and disconnect again. Any KB fixes I've missed or issues I've overlooked? Normally I only discover the issue when a user tells me the schedule has stopped running, so it's a real reliability issue. I'd also be happy with an answer suggesting an alternative scheduler with higher reliability. Thanks
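    Behaviour like this usually means the task was saved without stored credentials (or the stored password has become invalid), so it can only run inside an existing interactive session. A hedged sketch of re-registering the task with explicit credentials using schtasks, which is available on Server 2003; the task name, command, and account are placeholders, and on 2003 the start time may need to be given with seconds (HH:MM:SS):

        schtasks /Create /TN "NightlyJob" /TR "C:\scripts\nightly.cmd" /SC DAILY /ST 02:00 /RU DOMAIN\svc_sched /RP *

    The /RP * form prompts for the password and stores it with the task, so the Task Scheduler service can run it whether or not anyone is logged on. It is also worth confirming the account holds the "Log on as a batch job" right, since losing that right after a reboot or policy refresh produces the same "runs only after I log in" pattern.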

    Read the article

  • mystery Internet traffic to port 445

    - by Ben Collver
    Recently, I noticed traffic from the office network to TCP port 445 on the Internet [a]. Below are the Linux firewall log entries to Facebook's network [b] and Google's network [c]. I would like to identify the source of this traffic. My first guess is that Facebook and Google might be using multiple TCP ports for SSL load balancing. However, I could not confirm this based on the web proxy logs. What else might it be?
    [a] http://support.microsoft.com/kb/204279
    [b] Sep 4 08:30:03 firewall01 kernel: IN=eth0 OUT=eth2 SRC=10.0.0.131 DST=69.171.237.34 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=14287 DF PROTO=TCP SPT=51711 DPT=445 WINDOW=8192 RES=0x00 SYN URGP=0
    [c] Aug 28 06:02:41 firewall01 kernel: IN=eth0 OUT=eth2 SRC=10.0.0.115 DST=173.194.33.47 LEN=52 TOS=0x00 PREC=0x00 TTL=127 ID=4558 DF PROTO=TCP SPT=49294 DPT=445 WINDOW=8192 RES=0x00 SYN URGP=0
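    Port 445 is SMB, so these are most likely Windows clients on 10.0.0.x trying to reach a UNC path whose hostname resolved to a Facebook or Google address (for example via a mapped drive, a shortcut, or software probing outward), rather than anything those sites host. A hedged sketch of tracing it from the client side; the source addresses come from the firewall log, the PID is a placeholder for whatever netstat reports, and the rest is standard Windows tooling:

        rem on the machine behind 10.0.0.131 / 10.0.0.115, while the traffic is occurring
        netstat -ano | findstr :445
        rem map the PID from the last column to a process name
        tasklist /FI "PID eq 1234"

    Matching the timestamped firewall entries to a process on the originating client identifies the source far more directly than guessing from the destination networks.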

    Read the article
