Search Results

Search found 31269 results on 1251 pages for 'process management'.

  • John Hitchcock of Pace Describes the Oracle Agile PLM Customer Experience

    John Hitchcock, Senior Manager of Configuration Management at Pace (formerly 2Wire, Inc.), sat down for an interview during Oracle's Innovation Summit with Kerrie Foy, Manager of PLM Product Marketing at Oracle. Learn why his organization upgraded to the latest version of Agile and expanded the footprint to achieve impressive savings and productivity gains across the global, networked product value-chain.

    Read the article

  • Asus N550JV audio problem: no sound from the notebook's speakers

    - by skywalker
    Ubuntu 13.10. The problem: the internal speakers don't work. I have no problem when I'm using headphones, and there is no hardware issue, since in Windows 8 everything works perfectly (external subwoofer included). I'm trying to modify /etc/modprobe.d/alsa-base.conf, but I can't find the correct model to put into:

        options snd-hda-intel model=

    The file HD-Audio-Models.txt doesn't contain a model for the ALC668. Some info:

        $ sudo aplay -l
        **** List of PLAYBACK Hardware Devices ****
        card 0: MID [HDA Intel MID], device 3: HDMI 0 [HDMI 0]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 7: HDMI 1 [HDMI 1]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 0: MID [HDA Intel MID], device 8: HDMI 2 [HDMI 2]
          Subdevices: 1/1
          Subdevice #0: subdevice #0
        card 1: PCH [HDA Intel PCH], device 0: ALC668 Analog [ALC668 Analog]
          Subdevices: 0/1
          Subdevice #0: subdevice #0

        $ sudo lspci -v | grep -A7 -i "audio"
        00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
          Subsystem: Intel Corporation Device 2010
          Flags: bus master, fast devsel, latency 0, IRQ 52
          Memory at f7a14000 (64-bit, non-prefetchable) [size=16K]
          Capabilities: [50] Power Management version 2
          Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit-
          Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
          Kernel driver in use: snd_hda_intel
        --
        00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 04)
          Subsystem: ASUSTeK Computer Inc. Device 11cd
          Flags: bus master, fast devsel, latency 0, IRQ 53
          Memory at f7a10000 (64-bit, non-prefetchable) [size=16K]
          Capabilities: [50] Power Management version 2
          Capabilities: [60] MSI: Enable+ Count=1/1 Maskable- 64bit+
          Capabilities: [70] Express Root Complex Integrated Endpoint, MSI 00
          Capabilities: [100] Virtual Channel

    More info:

        $ amixer -c 0
        Simple mixer control 'IEC958',0
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',1
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]
        Simple mixer control 'IEC958',2
          Capabilities: pswitch pswitch-joined
          Playback channels: Mono
          Mono: Playback [on]

        $ pacmd dump-volumes
        Welcome to PulseAudio! Use "help" for usage information.
        Sink 0: reference = 0: 76% 1: 76%, real = 0: 76% 1: 76%, soft = 0: 100% 1: 100%, current_hw = 0: 76% 1: 76%, save = yes
        Input 8: volume = 0: 100% 1: 100%, reference_ratio = 0: 100% 1: 100%, real_ratio = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, volume_factor = 0: 100% 1: 100%, volume_factor_sink = 0: 100% 1: 100%, save = no
        Source 0: reference = 0: 100% 1: 100%, real = 0: 100% 1: 100%, soft = 0: 100% 1: 100%, current_hw = 0: 100% 1: 100%, save = no
        Source 1: reference = 0: 16% 1: 16%, real = 0: 16% 1: 16%, soft = 0: 100% 1: 100%, current_hw = 0: 16% 1: 16%, save = yes
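    If the speakers need a driver quirk, one thing to try (not from the question; the model names below are placeholder assumptions, not a confirmed list for the ALC668) is cycling through candidate snd-hda-intel model options. A minimal sketch:

        # Try one candidate model at a time; remove the test file once a
        # working model is found (or none is).
        for model in auto asus-mode4 headset-mic; do
            echo "options snd-hda-intel model=$model" | \
                sudo tee /etc/modprobe.d/alsa-base-test.conf
            sudo alsa force-reload   # reload the HDA driver so the quirk takes effect
            read -p "Tried model=$model -- any speaker sound? [y/N] " ok
            [ "$ok" = "y" ] && break
        done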

    Read the article

  • How to force remove a package if dpkg removal script fails?

    - by fodon
    I'm trying to remove a package where I deleted the /etc/init.d/disco-master file (in an attempt to remove the package manually). I want to remove the disco-master package. How do I do this now? This is what happens when I do sudo apt-get remove disco-master:

        removing disco-master ...
        invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found.
        dpkg: error processing disco-master (--remove):
         subprocess installed pre-removal script returned error exit status 100
        Errors were encountered while processing:
         disco-master
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I do sudo apt-get install --reinstall disco-master I get the following:

        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         disco-master : Depends: disco-node (= 0.4.2+nmu1) but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    When I do sudo apt-get -f install I get this:

        Unpacking disco-node (from .../disco-node_0.4.2+nmu1_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb (--unpack):
         trying to overwrite '/usr/lib/disco/master/ebin/disco.app', which is also in package disco-master 0.4.1
        No apport report written because MaxReports is reached already
        dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
        Errors were encountered while processing:
         /var/cache/apt/archives/disco-node_0.4.2+nmu1_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    When I run sudo apt-get remove disco-node I get the following:

        Package disco-node is not installed, so not removed
        You might want to run 'apt-get -f install' to correct these:
        The following packages have unmet dependencies:
         disco-master : Depends: disco-node (= 0.4.1) but it is not going to be installed
                        Depends: python-disco (= 0.4.1) but 0.4.2+nmu1 is to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    When I did sudo dpkg -P --force-all disco-master I got:

        Removing disco-master ...
        invoke-rc.d: unknown initscript, /etc/init.d/disco-master not found.
        dpkg: error processing disco-master (--purge):
         subprocess installed pre-removal script returned error exit status 100
        Errors were encountered while processing:
         disco-master
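    The removal fails inside the package's pre-removal maintainer script, not in dpkg itself. A common workaround (a hedged sketch, not specific to disco-master's packaging) is to replace the failing prerm script under /var/lib/dpkg/info/ with a no-op and retry:

        # Overwrite the broken prerm script with a stub that always
        # succeeds, then let dpkg finish the removal.
        printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/disco-master.prerm
        sudo chmod +x /var/lib/dpkg/info/disco-master.prerm
        sudo dpkg --remove disco-master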

    Read the article

  • Learn more about SPARC by listening to our newly recorded podcasts

    - by Cinzia Mascanzoni
    Please listen to our newly recorded series of four podcasts focused on SPARC. The topics are:

      • How SPARC T4 Servers Open New Opportunities
      • SPARC Roadmap and SPARC T4 Architecture Highlights
      • SPARC T4 for Installed Base Refresh and Consolidation
      • SPARC T4 – How Does It Stack Up Against the Competition?

    Rob Ludeman, from SPARC Product Management, and Thomas Ressler, WWA&C Alliances Consultant, are your hosts. The intent is to continue to help you understand how to position and sell SPARC T4 into your customer's architecture. Details on how to access these podcasts can be found here.

    Read the article

  • Outstanding SQL Saturday

    - by merrillaldrich
    I had the privilege of attending the SQL Saturday held in Redmond today, and it was really outstanding. Among the many sessions, I especially enjoyed, and took a lot of useful information away from, Greg Larsen's Dynamic Management Views session; Kalen Delaney's compression session (I am planning to implement 2008 Enterprise compression on my company's data warehouse later this year); Remus Rusanu's session on using Service Broker to process NAP data; and Matt Masson's presentation on high-performance SSIS...(read more)

    Read the article

  • Find files containing a string on the whole filesystem

    - by Fabio
    I need to find all instances of a given string in the whole filesystem, because I don't remember which configuration files, scripts or other programs I put it in, and I need to update that string with a new one. I tried the following command:

        grep -nr 'needle' / --exclude-dir=.svn | mail [email protected] -s 'References on xxx'

    If I run this command on a small directory it gives me the output I need, in the form:

        /path1/:nn:line containing needle
        /path2/:nn:line containing needle

    where /path1 is the full path of the file, nn is the row containing the needle, and the last field is the content of the line. However, when I run the command on the root directory, the grep process hangs after a while. I ran this script about 8 hours ago, and even on a small filesystem (less than 5GB) it doesn't end; if I run top or ps, the process seems to be sleeping:

        root 24909 0.0 0.1 3772 1520 pts/1 S+ Feb10 0:15 grep -nr needle / --exclude-dir=.svn

    Why doesn't it end? Is there any better way to do this? (It's a one-time job; I don't need to execute it more than once.) Thanks.
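    A likely reason for the hang (an educated guess, not from the question): a recursive grep from / descends into pseudo-filesystems such as /proc and /dev, where files like /proc/kcore and device nodes can block or read effectively forever. A hedged sketch that skips them (the mail address is a placeholder):

        # -D skip avoids blocking on device nodes, FIFOs and sockets;
        # -I skips binary files; the excluded directories are virtual
        # filesystems with nothing worth searching.
        grep -rn -I -D skip \
            --exclude-dir=.svn --exclude-dir=proc --exclude-dir=sys \
            --exclude-dir=dev --exclude-dir=run \
            'needle' / 2>/dev/null | mail -s 'References on xxx' user@example.com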

    Read the article

  • Updating physics for animated models

    - by Mathias Hölzl
    For a new game we have to set up a scene with a minimum of 30 bone-animated models (shooter). The problem is that the update process for the animated models takes too long. Here's what I do: each character has ~30 bones, and on every update tick the animation gets calculated and every bone fires an event with the new matrix. The physics receives the event with the new matrix and updates the collision shape for that bone. The time it takes to build the animation isn't that bad (0.2ms for 30 bones - 6ms for 30 models). But the main problem is that the physics engine (Bullet) uses a different matrix type for transformation, so it's necessary to convert it. Code for the matrix conversion (~0.005ms):

        btTransform CLEAR_PHYSICS_API Mat_to_btTransform( Mat mat )
        {
            btMatrix3x3 bulletRotation;
            btVector3 bulletPosition;
            XMFLOAT4X4 matData = mat.GetStorage();
            // copy (and transpose) the rotation part
            for ( int row = 0; row < 3; ++row )
                for ( int column = 0; column < 3; ++column )
                    bulletRotation[row][column] = matData.m[column][row];
            // copy the translation part
            for ( int column = 0; column < 3; ++column )
                bulletPosition[column] = matData.m[3][column];
            return btTransform( bulletRotation, bulletPosition );
        }

    The function for updating the transform (physics side):

        void CLEAR_PHYSICS_API BulletPhysics::VKinematicMove( Mat mat, ActorId aid )
        {
            if ( btRigidBody * const body = FindActorBody( aid ) )
            {
                btTransform tmp = Mat_to_btTransform( mat );
                body->setWorldTransform( tmp );
            }
        }

    The real problem is the function FindActorBody(id):

        // one map lookup per bone, per model, per tick
        ActorIDToBulletActorMap::const_iterator found = m_actorBodies.find( id );
        if ( found != m_actorBodies.end() )
            return found->second;

    All physics actors are stored in m_actorBodies, and that's why the updating process takes too long. But I have no idea how I could avoid this. Friendly greetings, Mathias

    Read the article

  • Open a terminal window & run command, then close the terminal window if command completed successfully?

    - by Caspar
    I'm trying to write a script to do the following:

      1. Open a terminal window which runs a long-running command
      2. (Ideally) move the terminal window to the top left corner of the screen using xdotool
      3. Close the terminal window only if the long-running command exited with a zero return code

    To put it in Windows terms, I'd like the Linux equivalent of start cmd /c long_running_cmd if long_running_cmd succeeds, and the equivalent of start cmd /k long_running_cmd if it fails. What I have so far is a script which starts xterm with a given command, and then moves the window as desired:

        #!/bin/bash
        # open a new terminal window in the background with the long running command
        xterm -e ~/bin/launcher.sh ./long_running_cmd &

        # move the terminal window (requires window process to be in background)
        sleep 1
        xdotool search --name launcher.sh windowmove 0 0

    And ~/bin/launcher.sh is intended to run whatever is passed as a command line argument to it:

        #!/bin/bash
        # execute command line arguments
        $@

    But I haven't been able to get the xterm window to close after long_running_cmd is done. I think something like

        xterm -e ~/bin/launcher.sh "./long_running_cmd && kill $PPID" &

    might be what I'm after, so that xterm is launched in the background and runs ./long_running_cmd && kill $PPID. The shell in the xterm window would then run the long-running command and, if it completes successfully, kill the parent process of the shell (i.e. the process owning the xterm window), thereby closing the xterm window. But that doesn't work: nothing happens, so I suspect my quoting or escaping is incorrect, and I haven't been able to fix it. An alternate approach would be to get the PID of long_running_cmd, use wait to wait for it to finish, then kill the xterm window using kill $! (since $! refers to the last task started in the background, which will be the xterm window). But I can't figure out a nice way to get the PID and exit value of long_running_cmd out of the shell running in the xterm window and into the shell which launched the xterm window (short of writing them to a file somewhere, which seems like it should be unnecessary?). What am I doing wrong, or is there an easier way to accomplish this?
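    One hedged sketch (a suggestion, not from the question): let the shell inside the xterm make the decision itself, so no PID or exit code has to leave the window. If the command succeeds, the shell exits and xterm closes; if it fails, exec bash keeps the window open for inspection:

        # close-on-success, stay-open-on-failure; long_running_cmd is the
        # placeholder from the question
        xterm -e bash -c './long_running_cmd || exec bash' &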

    Read the article

  • Context Sensitive History. Part 1 of 2

    A desktop and Silverlight user action management system with undo, redo, and repeat, allowing actions to be monitored, grouped according to a context (such as a UI control), executed sequentially or in parallel, and even rolled back on failure.

    Read the article

  • Midsize InDepth Newsletter - Simplify and Modernize Your Business with Cloud Solutions

    - by Roxana Babiciu
    Read the Oracle Midsize InDepth Newsletter feature articles for the latest Dynamic Market Report on real-world adoption of cloud applications at midsize organizations, hear from talent management expert and evangelist Pamela Stroko on the current state of employee engagement, and find out how midsize companies adopt Oracle WebLogic Server on Oracle Database Appliance. Plus new research reports, videos, success stories and the latest midsize news.

    Read the article

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop, I used the BizTalk Server Best Practice Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs, open SQL Server Management Studio, expand SQL Server Agent > Jobs and double click on the appropriate job. Select Steps and then edit the appropriate entries.

    Backup BizTalk Server (BizTalkMgmtDb)

    This job is comprised of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory.

    BackupFull:

        exec [dbo].[sp_BackupAllFull_Schedule]
            'd'                  /* Frequency */,
            'BTS'                /* Name */,
            '<destination path>' /* location of backup files */

    The frequency here is set/left as daily, and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC). For example:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd', 'BTS', '<destination path>', 0, 22

    MarkAndBackupLog:

        exec [dbo].[sp_MarkAll]
            'BTS'                /* Log mark name */,
            '<destination path>' /* location of backup files */

    You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time:

        exec [dbo].[sp_MarkAll] 'BTS', '<destination path>', 1

    ClearBackupHistory:

        exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7

    This will clear out the instances in the MarkLog table older than 7 days.

    DTA Purge and Archive (BizTalkDTADb)

    This job is comprised of a single step, Archive and Purge:

        exec dtasp_BackupAndPurgeTrackingDatabase
            0,    --@nLiveHours tinyint,
            1,    --@nLiveDays tinyint = 0,
            30,   --@nHardDeleteDays tinyint = 0,
            null, --@nvcFolder nvarchar(1024) = null,
            null, --@nvcValidatingServer sysname = null,
            0     --@fForceBackup int = 0

    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than the HardDeleteDays will be deleted - this means that those long-running orchestration instances that would otherwise never be purged will at some point have their data cleared down, while allowing the instance to continue, thus preventing the DTA database from growing indefinitely. This should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if this is null the job will not run, failing with the error:

        DTA Purge and Archive (BizTalkDTADb) Job failed
        SQL Server Management Studio, job activity monitor, view history
        The @nvcFolder parameter cannot be null. Archive and Purge step

    How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0

    On a live server you may want to adjust these figures:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

    Read the article

  • Archbeat Link-O-Rama Top 10 Facebook Faves for October 13-19, 2013

    - by OTN ArchBeat
    The list below represents the Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of October 13-19, 2013, as determined by the clicks, likes, and other activities among the 4,425 fans of that page.

      • Going Mobile with ADF – Implementing Data Caching and Syncing for Working Offline | Steven Davelaar
        Oracle Fusion Middleware A-Team solution architect Steven Davelaar takes you on a deep dive into how to use ADF Mobile to create an on-device application that supports working in offline mode.
      • OOW 2013 Summary for Fusion Middleware Architects & Administrators | Simon Haslam
        Oracle ACE Director Simon Haslam shares a very thorough and detailed summary of the most interesting news coming out of Oracle OpenWorld 2013 for Fusion Middleware architects and administrators.
      • Coherence Special Interest Group (SIG) – Sydney, October 24th
        If you're in the neighborhood... The Coherence Special Interest Group (SIG) in Sydney, Australia will be held on Thursday October 24th at the Park Hyatt Sydney, in The Rocks, between 9am and 5pm. The event will include presentations from customers, partners, and Coherence engineering team members and product managers. Click the link for more info.
      • Free eBook: Oracle Multitenant for Dummies
        Oracle Multitenant for Dummies is a new e-book that provides a clear overview of the Oracle Database 12c multitenant architecture. It's free (registration required).
      • Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration - Part 1: Introduction | Michael Rainey
        Michael Rainey launches a series of posts that guide you through "the architecture and setup for using GoldenGate with OBIA 11.1.1.7.1."
      • Enriching XMLType data using relational data – XQuery and fn:collection in action | Lucas Jellema
        Another detailed technical post from the always prolific Oracle ACE Director Lucas Jellema.
      • Webgate Reverse Proxy Farm | Vinay Kalra
        Vinay Kalra's blog post discusses architecture and recommendations for centralizing Webgate deployments onto a server farm.
      • Free Poster: Adaptive Case Management in Practice
        Thanks to Masons of SOA member Danilo Schmiedel for providing a hi-res copy of the Adaptive Case Management poster, now available for download from the OTN ArchBeat Blog.
      • Should your team use a framework? | Sten Vesterli
        "Some developers have an aversion to frameworks, feeling that it will be faster to just write everything themselves," observes Oracle ACE Director Sten Vesterli. He explains why that's a very bad idea in this short post.
      • Integrating Custom BPM Worklist into WebCenter Portal | Andrejus Baranovskis
        Oracle ACE Director Andrejus Baranovskis shares a sample application configured to run a custom BPM Worklist, and shares steps describing how to configure and access it from the WebCenter Portal.

    Thought for the Day: "Morning comes whether you set the alarm or not." — Ursula K. Le Guin (born October 21, 1929) Source: brainyquote.com

    Read the article

  • How to install Windows (x86/x64) on Linux (Ubuntu)

    - by yorrany
    I installed Ubuntu 10.04 over my Windows 7, completely wiping the original installation. I was later forced to reverse the process, but could not find tools or explanations of how to do it. To clarify the equipment: it is an Acer netbook with no CD/DVD optical drive, so the process must be done entirely via USB. I hope I've been clear enough; I'm counting on your support. Thank you.
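    One commonly suggested route (a suggestion, not from the question; the tool name, ISO path and device are all placeholders to verify) is a dedicated Windows-USB writer such as WoeUSB, since a plain dd of a Windows installer ISO generally does not produce a bootable stick:

        # Write a Windows 7 installer ISO to a USB stick; double-check the
        # target device first -- everything on it will be erased.
        sudo woeusb --device ~/Downloads/windows7.iso /dev/sdX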

    Read the article

  • Next Generation Directory @ Oracle Open World

    - by Etienne Remillon
    Oracle OpenWorld 2012 is bigger, better, and more educational than ever before, and identity management activities are no exception. For all identity-related activities, check this entry, or this handy PDF. Do you focus more specifically on directory? Come and meet the directory team at:

      • Our session: Next Generation Directory: Oracle Unified Directory / session #CON946 / Tuesday Oct 2, 5:00 pm / Moscone West L3, Room 3008
      • Our demo pod: Oracle Directory Services Plus: Performant, Cloud-Ready demo / Moscone South, Right - S-222
      • Demonstration hours @ Moscone South: Mon 10:00 - 6:00 / Tue 9:45 - 6:00 / Wed 9:45 - 4:00

    Read the article

  • Google Chrome Extensions: Launch Event (part 1)

    Video footage from the Google Chrome Extensions launch event on 12/09/09. In this part, Brian Rakowski, product management director, provides an update on Google Chrome and explains why extensions are important for the Google Chrome team. (From GoogleDevelopers, 04:39)

    Read the article

  • Why is my root filesystem always scanned at boot?

    - by luri
    I always get a pause at boot saying my filesystems are being checked (with a "press C to cancel" note, too). Actually, looking at boot.log, I think it's the / filesystem, which is located at /dev/sdb5. Several questions together here (hope this does not break any rule):

      1. Is this normal?
      2. Can I (or even should I) prevent this somehow?
      3. According to boot.log (below) the filesystem does not seem to be 'clean', or at least it's in a state or condition that makes fsck always scan it for errors for a while (just a few seconds). How can I fix it?

    Edit: This is my boot.log:

        fsck desde util-linux-ng 2.17.2
        udevd[515]: can not read '/etc/udev/rules.d/z80_user.rules'
        /dev/sdb5: 249045/32841728 ficheros (0.3% no contiguos), 20488485/131338752 bloques
        init: ureadahead-other main process (1111) terminated with status 4
        init: ureadahead-other main process (1116) terminated with status 4
        Password:
         * Starting AppArmor profiles
           Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox   [ OK ]
         * Setting sensors limits                                         [ OK ]

    And these are the dumpe2fs results for the filesystem being checked (the relevant part of the output):

        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32841728
        Block count:              131338752
        Reserved block count:     6566937
        Free blocks:              110850356
        Free inodes:              32592701
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Fri Dec 10 19:44:15 2010
        Last mount time:          Mon Feb 14 17:00:02 2011
        Last write time:          Mon Feb 14 16:59:45 2011
        Mount count:              1
        Maximum mount count:      33
        Last checked:             Mon Feb 14 16:59:45 2011
        Check interval:           15552000 (6 months)
        Next check after:         Sat Aug 13 17:59:45 2011
        Lifetime writes:          331 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       28049496
        Default directory hash:   half_md4
        Directory Hash Seed:      d3d24459-514b-4413-b840-e970b766095b
        Journal backup:           inode blocks
        Journal features:         journal_incompat_revoke
        Tamaño de fichero de transacciones: 128M
        Journal length:           32768
        Journal sequence:         0x0005e0c4
        Journal start:            1

    This is the relevant line (at least I think this is the filesystem being checked) in fstab:

        # Entry for /dev/sdb5 :
        UUID=42509bf9-f3e6-460a-8947-ec0f5c1fbcc8    /    ext4    errors=remount-ro    0    1
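    A hedged sketch for inspecting and adjusting the periodic-check triggers (a suggestion, not from the question; note that with "Mount count: 1" against "Maximum mount count: 33" and a 6-month interval, the timers above shouldn't fire on every boot, so this may only rule them out rather than fix the root cause):

        # Show the current check counters for the filesystem
        sudo dumpe2fs -h /dev/sdb5 | grep -Ei 'mount count|check'

        # Disable the mount-count and time-based checks entirely
        # (-c 0 = never by mount count, -i 0 = never by elapsed time)
        sudo tune2fs -c 0 -i 0 /dev/sdb5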

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).

    What is the Cross-Platform Transportable Tablespace feature? It allows users to move a user tablespace across Oracle databases, and is an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

    Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from the source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.
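    As an illustration of the conversion step (a hedged sketch, not from the article; tablespace name, platform string and paths are placeholders), the endian conversion is typically done with RMAN on the source system:

        # From a shell on the source host, connect RMAN to the database:
        #   rman target /
        # then run the conversion; converted datafile copies land under
        # /stage for transport to the target platform:
        CONVERT TABLESPACE users
          TO PLATFORM 'Linux x86 64-bit'
          FORMAT '/stage/%U';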

    Read the article

  • TIME column in TOP command for mysql

    - by michael
    When I run top on my database server I see that mysqld has been running for 4:00.51, and the number continues to go up. From other posts on here, I assume this means that one process within MySQL has been running this long. It's not set to cumulative mode, as best I can tell, since the heading would change to CTIME if that were the case. What I'm wondering is whether this is normal for a site that makes a lot of individual connections using PHP. I shouldn't have any long-running processes that would hold on to a MySQL connection this long, only seconds at most. Am I incorrect to assume that this time relates to one connection/process running? I think usually I see it flash up and away in top, not just stay there with this number increasing.
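    For what it's worth (a note, not from the question): top's TIME column for mysqld is the cumulative CPU time of the whole server process, not the age of any single connection. A hedged sketch for checking per-connection ages directly (credentials are placeholders):

        # Each row's Time column is the age in seconds of that
        # connection's current state -- this is what would reveal a
        # PHP connection that is being held open too long.
        mysqladmin -u root -p processlist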

    Read the article

  • Where Are You on the Visualization Maturity Curve?

    - by Celine Beck
    The old phrase "A picture is worth a thousand words" is as true now as ever. Providing the right users with access to the right product data, at the right time, can provide significant benefits to a business. This is especially evident with increasing technical and product complexity, elongated supply chains, and growing pressure to bring innovative products to market faster. With this in mind, it is easy to understand why visualization is an integral part of any successful product lifecycle management (PLM) strategy. At a bare minimum, knowledge workers use multiple individual documents of different formats and structure, and leverage visualization solutions to access information; but the real value of visualization can be fully reaped when it is connected to enterprise applications like PLM and tied to the appropriate business context. The picture below illustrates this visualization maturity curve and the transformational effect that visualization can have on PLM processes and performance, as we presented during the last Oracle OpenWorld (check out the post about AutoVue Key Highlights from Oracle OpenWorld 2012 for more information).

    Organizations are likely to see a greater positive impact on business performance when visualization is connected to enterprise systems, allowing access to information coming from multiple sources, such as PLM, supply chain management (SCM) and enterprise resource planning (ERP). This allows organizations to reach higher levels of collaboration and optimize decision-making capacity, as users benefit from in-context access to visual information. For instance, within a PLM system, a design engineer can access a product assembly and review digital annotations added by other users specific to the engineering change request he is reviewing, rather than all historical annotations.

    The last stage on the curve is what we call augmented business visualization (ABV). ABV is an innovative framework which lets structured data (from Oracle's Agile PLM, for instance) interact with unstructured data (documents, designs, 3D models, etc.). With this new level of integration, information coming from multiple sources can be presented in a highly visual fashion; color displays can be used to identify parts with specific characteristics (for example, pending quality issues), and you can take actions directly from within the context of documents and designs, maximizing user productivity. Those who had the chance to attend our PLM session during Oracle OpenWorld already got a sneak peek of our latest augmented business visualization for Oracle's Agile PLM. The solution generated a lot of wows. Stephen Porter, CEO at Zero Wait State, indicated in a post entitled "The PLM State: the Manhattan Project - Oracle's Next Big Secret Weapon" that "this kind of synergy between visualization and PLM could qualify as a powerful weapon differentiating Agile PLM from other solutions."

    If you are interested in learning more about ABV for Oracle's Agile PLM and hearing real examples of the use of visualization at all stages of the visualization maturity curve, don't miss our Visual Decision Making to Optimize New Product Development and Introduction session during the Oracle Value Chain Summit (Feb. 4-6, 2013, San Francisco). We look forward to seeing you there!

    Read the article

  • How can I log all traffic with its exact length?

    - by Legate
    I want to process all packets, with their sizes, going through our gateway server (running Debian 4.0). My idea is to use tcpdump, but I have two questions. The command I'm currently thinking of is:

        tcpdump -i iface -n -t -q

      1. Is it guaranteed that tcpdump will process all packets? What happens if the CPU is working at full capacity?
      2. The format of the output lines is IP ddd.ddd.ddd.ddd.port > ddd.ddd.ddd.ddd.port: tcp 1260. What exactly is 1260? I suspect it is the payload of the packet in bytes, which would be exactly what I need, but I'm not sure. It might be the TCP window size.

    Or perhaps there is an even better way of doing this? I thought about a LOG rule in iptables, but tcpdump seems easier, and I don't know whether iptables can log packet lengths.
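    On the iptables alternative mentioned above (a hedged sketch; the chain and log prefix are placeholders): the LOG target does record each packet's total length, as the LEN= field in the kernel log, so it can answer the sizing question without tcpdump:

        # Log every forwarded packet; entries land in the kernel log
        # (e.g. /var/log/kern.log) with a LEN= field giving the total
        # IP packet length in bytes.
        iptables -I FORWARD -j LOG --log-prefix "acct: "

        # Sample resulting log line (illustrative):
        #   acct: IN=eth0 OUT=eth1 SRC=10.0.0.5 DST=93.184.216.34 LEN=1260 ...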

    Read the article

  • How can I upgrade my server's kernel without rebooting?

    - by Oli
    This is a loaded question, because I'm already aware of, and am very interested in, Ksplice. The problem is that since they were bought by Oracle, they have been forced to pull numerous server distributions from the offerings, so the answer isn't as simple as it once was. I noticed a question on Unix.SE that states: "You can build your own ksplice patches to dynamically load into your own kernel." Great! But how?! I've installed the free ksplice package from the repo on my desktop (not ksplice-uptrack, which is non-free) and now want to generate and apply updates. What's the process? Are there any scripts out there to automate the process? Moreover, if all the machinery required for rebootless upgrades is sitting there in the kernel (and the ksplice package), why on earth aren't we taking advantage of it by default?

    Note 1: I am happy with a solution besides Ksplice, but it has to deliver the same thing: rolling updates to the kernel that can be applied without rebooting the server.

    Note 2: I'll say it again; the main Ksplice "service" does not support Ubuntu Server. It used to, but it doesn't any more. When I talk about wanting to use ksplice, I'm talking about the open source tools in the ksplice package. Any answer that talks about ksplice-uptrack is probably not what I'm after, as this is the part that integrates directly with the aforementioned "service".
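    For orientation, the open-source ksplice package ships command-line tools for exactly this workflow. A hedged sketch of their intended use (the flags and arguments here are from memory and may differ by version -- verify them against the man pages shipped with your package before relying on them):

        # Build a rebootless update from a source patch against the
        # running kernel's source tree, then hot-apply it.
        ksplice-create --patch=fix.patch ~/src/linux-source-$(uname -r)
        sudo ksplice-apply ksplice-<id>.tar.gz   # <id> is generated by ksplice-create
        ksplice-view                             # list currently applied updates
        sudo ksplice-undo <id>                   # back the update out again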

    Read the article
