Search Results

Search found 24992 results on 1000 pages for 'oracle enterprise manager'.


  • BI Applications 7.9.6.3 and EBS 12.1.3 Vision: Integrated Demo Environments

    - by Mike.Hallett(at)Oracle-BI&EPM
    If you need a combined BI-Applications + eBusiness Suite Applications demonstration environment, or a proof-of-concept environment for your customers, these versions of images on Oracle Virtual Box are now available for partners to download and use. To get access to these images, partners must be OPN members specialised in OBI or BI-Apps.

    This is an integrated Demo/Test Drive/POC/Self Enablement environment including two separate images (in English) representing the entire Oracle stack: Applications, Middleware, Database, Operating System and Virtual Machine.

    Minimum hardware requirements:
    - Each image run separately: 4GB RAM
    - Both images run concurrently: 8GB RAM, dual CPU, 64-bit OS

    BI Applications 7.9.6.3 (Linux based, running on Oracle Virtual Box and compatible with OVM)
    Image content:
    - BI Application Analytics demo data extracted from EBS 12.1.3 Vision for Financials and HR, using the EBS 12.1.3 Vision image (supplied)
    - Built integration to the EBS 12.1.3 Vision image (provided)
    - Fully functional BI Applications 7.9.6.3 software install and configuration
    - The image can be connected to load data from any other compatible source system
    - BI Apps demo data is based on the out-of-the-box EBS Vision 12.1.3
    - Configured to run BI Apps data loads for all other modules of EBS 12.1.3 Vision
    - Includes OBIEE Sample demonstration content
    - Documented scripts for running presentations, demonstrations and Test Drives
    Image size: 34GB zipped, 84GB unzipped. Minimum hardware: 4GB RAM

    EBS Vision 12.1.3 (Linux based, running on Oracle Virtual Box and compatible with OVM)
    Image content:
    - eBusiness Suite (EBS) Applications Vision 12.1.3
    - Standard Vision instance with all given setups, configurations and data
    - Source system for BI Apps 7.9.6.3
    Image size: 76GB zipped, 300GB unzipped. Minimum hardware: 4GB RAM

    Distribution: The Virtual Box images are posted on an external FTP server (BI Applications 7.9.6.3 / EBS 12.1.3). To download, partners need to request the current password to access the images. To request the current ftp.oracle.com password and the password required to unzip the images, please email Marek Winiarski.

    Support contact: Marek Winiarski, Oracle Partner Solution Consultant
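    For partners new to the images, a minimal sketch of what the download-and-import flow might look like, assuming hypothetical archive and appliance names (the real names and the ftp.oracle.com credentials come from the email request described above):

      # Hedged sketch; file names below are illustrative, not the actual ones.
      ftp ftp.oracle.com                      # log in with the password provided by Oracle
      # ...download e.g. bia_79063_demo.zip (hypothetical name)...
      unzip bia_79063_demo.zip                # the BI Apps image needs ~84GB free once unzipped
      VBoxManage import bia_79063_demo.ova    # register the appliance with Oracle Virtual Box
      VBoxManage modifyvm "BIA 7.9.6.3" --memory 4096   # hypothetical VM name; 4GB RAM minimum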

    Read the article

  • New Packt Books: APEX & JRockit

    - by [email protected]
    I have received these two ebooks from Packt Publishing and I am currently reviewing them. Both of them look great so far.

    Oracle Application Express 3.2 - The Essentials and More
    First of all, I have to mention that I am new to APEX. I was interested in this product, which is a development tool for web applications on the Oracle Database. As I support JDeveloper and ADF, which are products that work very closely with the Oracle Database and are likewise rapid development tools, it is always interesting and useful to know complementary tools. APEX looks very useful and the book includes many working examples. A more complete review of this book is coming soon. Further information about this book can be seen at Packt.

    Oracle JRockit: The Definitive Guide
    Many of our Oracle Coherence customers run their caches and clusters on JRockit. This JVM has helped us solve lots of Service Requests. It is a really reliable, fast and stable JVM. It works great in both development and production environments with large amounts of data, concurrency, multi-threading and many other factors that can make a JVM crash. I must also mention JRockit Mission Control (JRMC), which is a great tool for management and monitoring; I really recommend it. As a matter of fact, some months ago I created a document entitled "How to Monitor Coherence-Based Applications using JRockit Mission Control" (Doc Id 961617.1) on My Oracle Support. The JRockit Runtime Analyzer (JRA) and its successor in newer versions, the JRockit Flight Recorder (JFR), are also reviewed in depth. This book contains very clear and complete information about all this and more. I will post an entry with a more complete review soon (and will probably post an entry about Coherence monitoring with JRMC soon too). Further information about this book can be seen at Packt.

    Read the article

  • Patch Set Update: Hyperion Essbase 11.1.2.3.502

    - by Paul Anderson -Oracle
    A Patch Set Update (PSU) is available for Oracle Hyperion Essbase 11.1.2.3.x. The PSU downloads are in the My Oracle Support | Patches & Updates section.

    Hyperion Essbase Server 11.1.2.3.502
    - Patch 18950479: Essbase Server

    Hyperion Essbase Client 11.1.2.3.502
    - Patch 18950453: Essbase RTC
    - Patch 18950474: Essbase Client
    - Patch 18950482: Essbase MSI

    Hyperion Essbase Administration Services (EAS) 11.1.2.3.502
    - Patch 17767626: Essbase Server
    - Patch 17767628: Essbase Console MSI

    Hyperion Analytic Provider Services (APS) 11.1.2.3.502
    - Patch 18907738: APS Services

    Hyperion Essbase Studio 11.1.2.3.502
    - Patch 18907980: Essbase Studio Server
    - Patch 18907987: Essbase Studio Console MSI

    Refer to the Readme file prior to proceeding with this PSU implementation for important information, including a full list of the defects fixed, additional support information, prerequisites, details for applying the patch and a troubleshooting FAQ. It is important to ensure that the requirements and support paths for this patch are met as outlined within the Readme file. The Readme file is available from the Patches & Updates download screen.

    To locate the latest Essbase Patch Sets and Patch Set Updates at any time, visit the My Oracle Support (MOS) Knowledge Article: Available Patch Sets and Patch Set Updates for Oracle Hyperion Essbase (Doc ID 1396084.1).

    Why not share your experience of installing this patch? In the MOS | Patches & Updates screen simply click "Start a Discussion" and submit your review. Patch install reviews and other patch-related information are available within the My Oracle Support Communities. Visit the Oracle Hyperion EPM sub-space: Hyperion Patch Reviews
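    As a rough illustration only: Essbase PSUs of this vintage are typically applied with OPatch from the EPM Oracle home. A minimal sketch, assuming hypothetical paths (the patch Readme is the authoritative procedure):

      # Hedged sketch; paths are illustrative, follow the Readme for your install.
      export ORACLE_HOME=/u01/Oracle/Middleware/EPMSystem11R1   # hypothetical EPM Oracle home
      cd /tmp/18950479                                          # unzipped Essbase Server patch
      $ORACLE_HOME/OPatch/opatch apply                          # apply the patch to this home
      $ORACLE_HOME/OPatch/opatch lsinventory                    # confirm 18950479 is now listed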

    Read the article

  • Pluscom MAC address shows up but "wireless unavailable"

    - by Luigi
    I've got a Pluscom PCI card plugged into my desktop and although the MAC address shows up in Network Manager, it says "Wireless unavailable". I'm using Ubuntu 12.10 Gnome Shell Remix...

      edgar@edgarQuantal:~$ iwconfig
      eth0   no wireless extensions.
      lo     no wireless extensions.
      wlan0  IEEE 802.11bg ESSID:off/any
             Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm
             Retry long limit:7 RTS thr:off Fragment thr:off
             Power Management:off

      edgar@edgarQuantal:~$ ifconfig
      eth0   Link encap:Ethernet HWaddr 6c:f0:49:02:1f:b8
             inet addr:192.168.0.7 Bcast:192.168.0.255 Mask:255.255.255.0
             inet6 addr: fe80::6ef0:49ff:fe02:1fb8/64 Scope:Link
             UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
             RX packets:1425 errors:0 dropped:0 overruns:0 frame:0
             TX packets:1393 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:1000
             RX bytes:995753 (995.7 KB) TX bytes:134079 (134.0 KB)
      lo     Link encap:Local Loopback
             inet addr:127.0.0.1 Mask:255.0.0.0
             inet6 addr: ::1/128 Scope:Host
             UP LOOPBACK RUNNING MTU:16436 Metric:1
             RX packets:363 errors:0 dropped:0 overruns:0 frame:0
             TX packets:363 errors:0 dropped:0 overruns:0 carrier:0
             collisions:0 txqueuelen:0
             RX bytes:40828 (40.8 KB) TX bytes:40828 (40.8 KB)

    It looks like the same problem as here, which seems quite complicated: http://ubuntuforums.org/showthread.php?t=1993226

    I tried downloading the driver, but I get this:

      Unpacking firmware-ralink (from .../firmware-ralink_0.36_all.deb) ...
      dpkg: error processing /home/edgar/Downloads/firmware-ralink_0.36_all.deb (--install):
       trying to overwrite '/lib/firmware/rt2870.bin', which is also in package linux-firmware 1.95
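    Two details in the output above suggest hedged things to try (not a verified fix for this card): Tx-Power=0 dBm often means the radio is soft-blocked, and the dpkg failure is a plain file conflict with linux-firmware, which already ships rt2870.bin:

      rfkill list                     # check for soft/hard blocks on the wlan device
      sudo rfkill unblock all
      # linux-firmware already provides rt2870.bin, so firmware-ralink is usually
      # unnecessary; if you still want it, force the overwrite at your own risk:
      sudo dpkg -i --force-overwrite /home/edgar/Downloads/firmware-ralink_0.36_all.deb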

    Read the article

  • How do I get a rt2800usb wireless device working?

    - by Jii
    My brand new desktop running 13.04 has endless problems with wireless; dozens of others are flooding forums with reports of the same problems. It worked fine for a few days, then for a few days it had problems sometimes and worked sometimes. Now it never works at all. I have 5+ devices all able to connect without any trouble, including an iPhone, an Android phone, a 3DS, multiple game consoles, a laptop running Windows 7, and even a second desktop machine running Ubuntu 12.04 sitting right behind the 13.04 machine. All other devices show full wireless bars (strong signals).

    At any moment, one of the following is happening, and it changes randomly:
    - Trying to connect forever, but never establishing a connection. Wireless icon constantly animating.
    - Finds no wireless networks at all. (There are 12+ in range according to other devices.)
    - Will not try to connect to the network. If I use the icon to connect, it will display "Disconnected" within a few seconds.
    - Will continuously ask for the network password. Typing it in correctly does not help.
    - Wireless is working fine. This happens sometimes. It can work for days at a time, or only 10 minutes at a time.

    Various things that usually do nothing but sometimes fix the problem:
    - Reboot. This has the best chance of helping, but it usually takes 5+ tries.
    - Disable/re-enable Wi-Fi using the wireless icon.
    - Disable/re-enable Networking using the wireless icon.
    - Use the icon to try and connect to a network (if found).
    - Use the icon to open Edit Connections and delete my connection info, causing it to be recreated (once it's actually found again).

    Various things that seem to make no difference:
    - Changing between Linux kernel versions in grub at bootup (3.10.0, 3.9.0 or 3.8.0).
    - Moving the wireless router very close to the desktop.
    - Running sudo rfkill unblock all (I don't know what this is supposed to do.)

    I've used Ubuntu for 6 years and I've never had a problem with networking. Now I'm spending all my time reading through endless problem reports and trying all the answers. None of them have helped. I am doing this instead of getting work done, which is defeating the whole purpose of using Ubuntu. It's heartbreaking, to be honest.
    In the current state of "no networks are showing up", here are outputs from the random things that other people are usually asked to run:

      lspci
      00:00.0 Host bridge: Intel Corporation Haswell DRAM Controller (rev 06)
      00:01.0 PCI bridge: Intel Corporation Haswell PCI Express x16 Controller (rev 06)
      00:14.0 USB controller: Intel Corporation Lynx Point USB xHCI Host Controller (rev 04)
      00:16.0 Communication controller: Intel Corporation Lynx Point MEI Controller #1 (rev 04)
      00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-V (rev 04)
      00:1a.0 USB controller: Intel Corporation Lynx Point USB Enhanced Host Controller #2 (rev 04)
      00:1b.0 Audio device: Intel Corporation Lynx Point High Definition Audio Controller (rev 04)
      00:1c.0 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #1 (rev d4)
      00:1c.2 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d4)
      00:1d.0 USB controller: Intel Corporation Lynx Point USB Enhanced Host Controller #1 (rev 04)
      00:1f.0 ISA bridge: Intel Corporation Lynx Point LPC Controller (rev 04)
      00:1f.2 SATA controller: Intel Corporation Lynx Point 6-port SATA Controller 1 [AHCI mode] (rev 04)
      00:1f.3 SMBus: Intel Corporation Lynx Point SMBus Controller (rev 04)
      01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1)
      01:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev a1)
      03:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)

      lsmod
      Module                 Size  Used by
      e100                  41119  0
      nls_iso8859_1         12713  1
      parport_pc            28284  0
      ppdev                 17106  0
      bnep                  18258  2
      rfcomm                47863  12
      binfmt_misc           17540  1
      arc4                  12573  2
      rt2800usb             27201  0
      rt2x00usb             20857  1 rt2800usb
      rt2800lib             68029  1 rt2800usb
      rt2x00lib             55764  3 rt2x00usb,rt2800lib,rt2800usb
      coretemp              13596  0
      mac80211             656164  3 rt2x00lib,rt2x00usb,rt2800lib
      kvm_intel            138733  0
      kvm                  452835  1 kvm_intel
      cfg80211             547224  2 mac80211,rt2x00lib
      crc_ccitt             12707  1 rt2800lib
      ghash_clmulni_intel   13259  0
      aesni_intel           55449  0
      usb_storage           61749  1
      aes_x86_64            17131  1 aesni_intel
      joydev                17613  0
      xts                   12922  1 aesni_intel
      nouveau             1001310  3
      snd_hda_codec_hdmi    37407  1
      lrw                   13294  1 aesni_intel
      gf128mul              14951  2 lrw,xts
      mxm_wmi               13021  1 nouveau
      snd_hda_codec_realtek 46511  1
      ablk_helper           13597  1 aesni_intel
      wmi                   19256  2 mxm_wmi,nouveau
      snd_hda_intel         44397  5
      ttm                   88251  1 nouveau
      drm_kms_helper        49082  1 nouveau
      drm                  295908  5 ttm,drm_kms_helper,nouveau
      snd_hda_codec        190010  3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel
      cryptd                20501  3 ghash_clmulni_intel,aesni_intel,ablk_helper
      snd_hwdep             13613  1 snd_hda_codec
      snd_pcm              102477  3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel
      btusb                 18291  0
      snd_page_alloc        18798  2 snd_pcm,snd_hda_intel
      snd_seq_midi          13324  0
      i2c_algo_bit          13564  1 nouveau
      snd_seq_midi_event    14899  1 snd_seq_midi
      snd_rawmidi           30417  1 snd_seq_midi
      snd_seq               61930  2 snd_seq_midi_event,snd_seq_midi
      bluetooth            251354  22 bnep,btusb,rfcomm
      snd_seq_device        14497  3 snd_seq,snd_rawmidi,snd_seq_midi
      lpc_ich               17060  0
      snd_timer             29989  2 snd_pcm,snd_seq
      mei                   46588  0
      snd                   69533  20 snd_hda_codec_realtek,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device
      psmouse               97838  0
      microcode             22923  0
      soundcore             12680  1 snd
      video                 19467  1 nouveau
      mac_hid               13253  0
      serio_raw             13215  0
      lp                    17799  0
      parport               46562  3 lp,ppdev,parport_pc
      hid_generic           12548  0
      usbhid                47346  0
      hid                  101248  2 hid_generic,usbhid
      ahci                  30063  3
      libahci               32088  1 ahci
      e1000e               207005  0
      ptp                   18668  1 e1000e
      pps_core              14080  1 ptp

      sudo lshw -c network
      [output pasted here was a repeat of the lspci listing above]

      sudo iwconfig
      eth0   no wireless extensions.
      lo     no wireless extensions.
      wlan0  IEEE 802.11bgn ESSID:off/any
             Mode:Managed Access Point: Not-Associated Tx-Power=20 dBm
             Retry long limit:7 RTS thr:off Fragment thr:off
             Encryption key:off Power Management:on

      sudo iwlist scan
      eth0   Interface doesn't support scanning.
      lo     Interface doesn't support scanning.
      wlan0  No scan results

    NOTE: This dmesg was done after a reboot where the network manager was continuously displaying the "disconnected" message over and over, so it must have been trying to connect at this time. My network was displayed in the list of options, as the only option, despite other devices picking up 12+ access points. The router channel is set to auto.
      dmesg | tail -30
      [ 187.418446] wlan0: associated
      [ 190.405601] wlan0: disassociated from 00:14:d1:a8:c3:44 (Reason: 15)
      [ 190.443312] cfg80211: Calling CRDA to update world regulatory domain
      [ 190.443431] wlan0: deauthenticating from 00:14:d1:a8:c3:44 by local choice (reason=3)
      [ 190.451635] cfg80211: World regulatory domain updated:
      [ 190.451643] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      [ 190.451648] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 190.451652] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      [ 190.451656] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      [ 190.451659] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 190.451662] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 191.824451] wlan0: authenticate with 00:14:d1:a8:c3:44
      [ 191.850608] wlan0: send auth to 00:14:d1:a8:c3:44 (try 1/3)
      [ 191.884604] wlan0: send auth to 00:14:d1:a8:c3:44 (try 2/3)
      [ 191.886309] wlan0: authenticated
      [ 191.886579] rt2800usb 3-5.3:1.0 wlan0: disabling HT as WMM/QoS is not supported by the AP
      [ 191.886588] rt2800usb 3-5.3:1.0 wlan0: disabling VHT as WMM/QoS is not supported by the AP
      [ 191.889556] wlan0: associate with 00:14:d1:a8:c3:44 (try 1/3)
      [ 192.001493] wlan0: associate with 00:14:d1:a8:c3:44 (try 2/3)
      [ 192.040274] wlan0: RX AssocResp from 00:14:d1:a8:c3:44 (capab=0x431 status=0 aid=3)
      [ 192.044235] wlan0: associated
      [ 193.948188] wlan0: deauthenticating from 00:14:d1:a8:c3:44 by local choice (reason=3)
      [ 193.981501] cfg80211: Calling CRDA to update world regulatory domain
      [ 193.984080] cfg80211: World regulatory domain updated:
      [ 193.984082] cfg80211: (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      [ 193.984084] cfg80211: (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 193.984085] cfg80211: (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      [ 193.984085] cfg80211: (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      [ 193.984086] cfg80211: (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      [ 193.984087] cfg80211: (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    The router uses MAC filtering, and security is WPA PSK with the cipher set to auto. So, any ideas? Or is the solution just to not use 13.04 unless you have a wired connection? (I don't have this option.) If so, please just tell me straight. I survived 9.04 Jaunty, and I can survive 13.04 Raring.

    Update #1
    Results from trying Wild Man's first answer:

      jii@conan:~$ echo "options rt2800usb nohwcrypt=y" | sudo tee /etc/modprobe.d/rt2800usb.conf
      options rt2800usb nohwcrypt=y
      jii@conan:~$ sudo modprobe -rfv rt2800usb
      rmmod rt2800usb
      rmmod rt2800lib
      rmmod crc_ccitt
      rmmod rt2x00usb
      rmmod rt2x00lib
      rmmod mac80211
      rmmod cfg80211
      jii@conan:~$ sudo modprobe -v rt2800usb
      insmod /lib/modules/3.10.0-031000-generic/kernel/lib/crc-ccitt.ko
      insmod /lib/modules/3.10.0-031000-generic/kernel/net/wireless/cfg80211.ko
      insmod /lib/modules/3.10.0-031000-generic/kernel/net/mac80211/mac80211.ko
      insmod /lib/modules/3.10.0-031000-generic/kernel/drivers/net/wireless/rt2x00/rt2x00lib.ko
      insmod /lib/modules/3.10.0-031000-generic/kernel/drivers/net/wireless/rt2x00/rt2800lib.ko
      insmod /lib/modules/3.10.0-031000-generic/kernel/drivers/net/wireless/rt2x00/rt2x00usb.ko
      insmod /lib/modules/3.10.0-031000-generic/kernel/drivers/net/wireless/rt2x00/rt2800usb.ko nohwcrypt=y

    I tried:
      gksudo gedit /etc/pm/power.d/wireless
    but I didn't have the package.
    It said to install gksu. I tried that, but of course, not having Internet, I didn't get the package. So instead I did:

      sudo gedit /etc/pm/power.d/wireless

    which created the file. Here is the body:

      #!/bin/sh
      /sbin/iwconfig wlan0 power off

    I then rebooted. No change. I tried adding exit 0 to the bottom of the wireless file and rebooted. No change. Please note that this is a desktop machine. I'm assuming power management is primarily for laptops, but iwconfig does state that power management is on, so who knows. The recommended router changes I did not make, since the current router settings are (I think) required for some of the older devices I have, and because the current settings work on all my modern devices, including Ubuntu 12.04 and Windows 7. I do appreciate the advice though, and I'll look into it when I have time. Anything else to try?

    Update #2
    I booted into Ubuntu 12.04.3 from a DVD, and the same problems exist. I have a separate old desktop machine with 12.04 installed that has no wireless problems at all. So obviously the problem is wireless hardware compatibility in both 12.04.3 LTS and 13.04.

    Update #3
    The same problems exist even when using a wired connection. I plugged an ethernet cable directly into the router and the network manager added an "Auto Ethernet" entry, but it cannot establish a connection to it. So the problem is not specific to wireless. Meanwhile, I purchased a Trendnet N300 wireless USB adapter, TEW-664UB. I plugged it in, but I have no idea how to get Ubuntu to try and use it. Can anyone tell me how? Can I download a package on another computer and copy the .deb over to do an install, etc.? I'm installing Windows 7 to double-check that the internet connection works there and it's not just some magically faulty hardware. Thanks for your help.
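    On the last question: yes, .deb files fetched on another machine can be installed offline. A hedged sketch, assuming the online machine runs the same Ubuntu release and architecture (package name illustrative):

      # On the machine WITH internet:
      apt-get download firmware-ralink       # downloads the .deb into the current directory
      # Copy the .deb over on a USB stick, then on the offline machine:
      sudo dpkg -i firmware-ralink_*.deb
      sudo apt-get -f install                # only needed if dpkg reports missing dependencies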

    Read the article

  • Kubuntu 12.04 - DNS Issues

    - by AndrewJesaitis
    Starting yesterday (6/11/12), I've been having many network problems. When requesting a page in Chrome, the page hangs on "Sending request" and then eventually times out. I'm within a VPN that has its own DNS server. I've tried to set my DNS manually through the Network-Manager applet and by editing /etc/network/interfaces. Having no luck, I unlinked the resolv.conf file and dumped the contents of my old resolv.conf into it. Again having no luck, I deactivated the dnsmasq server in /etc/NetworkManager/NetworkManager.conf by commenting out the dns=dnsmasq line.

      $ cat NetworkManager.conf
      [main]
      plugins=ifupdown,keyfile
      #dns=dnsmasq
      no-auto-default=D0:67:E5:EA:B6:6B,

      [ifupdown]
      managed=false

      $ nm-tool
      NetworkManager Tool
      State: connected (global)
      - Device: eth0 [Wired connection 1] -------------------------------------------
        Type:              Wired
        Driver:            tg3
        State:             connected
        Default:           yes
        HW Address:        D0:67:E5:EA:B6:6B
        Capabilities:
          Carrier Detect:  yes
          Speed:           1000 Mb/s
        Wired Properties
          Carrier:         on
        IPv4 Settings:
          Address:         192.168.254.122
          Prefix:          24 (255.255.255.0)
          Gateway:         192.168.254.2
          DNS:             192.168.254.1

    What is strange is that the network will work fine for a few minutes, then start to time out; a few minutes later it will work again. I'm unable to reach internal or external sites when it is timing out. When I dig local sites, I receive no answer; I do receive an answer for google.com. At this point I would usually blame the DNS server, especially since things work when I change to Google's DNS server. But I need to use our internal DNS to reach our internal sites. Nobody else is having issues, and they are all using DHCP; this group includes one user who is using 11.04. At this point I'm at a loss for what to do, so any help would be appreciated.
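    One way to separate a flaky DNS server from a broken local resolver setup is to query the server directly, bypassing /etc/resolv.conf; a small diagnostic sketch (intranet.example.com is a placeholder for a real internal name):

      dig intranet.example.com                  # uses whatever resolv.conf points at
      dig @192.168.254.1 intranet.example.com   # queries the internal DNS server directly
      dig @8.8.8.8 google.com                   # sanity check that bypasses the VPN DNS
      # If the direct @192.168.254.1 queries also time out intermittently, the problem
      # is the server or the VPN path to it, not the local Ubuntu configuration.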

    Read the article

  • Ubuntu 12.04 suddenly will not boot after compiz crash on lockscreen

    - by schonjones
    Yesterday I went to work and closed my laptop. When I got home I couldn't get the lock screen to come back up. I pressed Ctrl+Alt+F2 to get to a terminal and tried to kill compiz; compiz was not running, so I tried to restart it and got an error opening the display. So I tried to restart lightdm; again it couldn't open the display. So I restarted, and now I am stuck on the loading screen. I tried booting in recovery mode and all options hang except root, but there I'm stuck with a read-only filesystem. Booting from a live CD I can mount the drive, but again I'm stuck with read-only access. I'm completely stuck here. Checking my /etc/X11/default-display-manager returns "/usr/sbin/elsa" as the only line, and I can't edit it. I've got an 11.10 live CD, and my /home is encrypted, so I can't try to back anything up at the moment either. How can I resolve this, or at the least back up some of my data? Please help!
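    A hedged sketch of a first step, assuming the recovery-mode root shell works: the root filesystem can usually be remounted read-write before editing anything, and "elsa" is not a standard Ubuntu display manager, so pointing the file back at lightdm (the usual 12.04 default) may be worth trying:

      mount -o remount,rw /                                        # make the root fs writable
      echo /usr/sbin/lightdm > /etc/X11/default-display-manager    # assumes lightdm is installed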

    Read the article

  • Have to run sudo dhclient eth0 manually on every boot

    - by Fyksen
    I just installed Ubuntu 12.04.1 from the alternate install CD (for RAID 0 on some disks), and I have some problems with the network. I'm at school; we use cable, and it has IPv6. If I run ifconfig eth0, here's my output:

      eth0  Link encap:Ethernet HWaddr e0:cb:4e:87:ff:db
            inet addr:128.39.194.217 Bcast:128.39.194.223 Mask:255.255.255.224
            inet6 addr: 2001:700:1100:8008:e2cb:4eff:fe87:ffdb/64 Scope:Global
            inet6 addr: fe80::e2cb:4eff:fe87:ffdb/64 Scope:Link
            inet6 addr: 2001:700:1100:8008:48f7:c23:1d87:da6c/64 Scope:Global
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:1063378 errors:0 dropped:0 overruns:0 frame:0
            TX packets:489811 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:1577173461 (1.5 GB) TX bytes:37043669 (37.0 MB)
            Interrupt:68 Base address:0x6000

    My /etc/network/interfaces looks like this (I had to remove the hash marks because of the big font I get on Ask Ubuntu):

      # This file describes the network interfaces available on your system
      # and how to activate them. For more information, see interfaces(5).

      # The loopback network interface
      auto lo
      iface lo inet loopback

      # The primary network interface
      auto eth0
      # NetworkManager#iface eth0 inet dhcp
      # NetworkManager#hostname 2001:700:1100:1::4

      # This is an autoconfigured IPv6 interface
      iface eth0 inet6 auto

    The "network manager" says that I'm not connected. Let me know if you need any more information. :)
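    A hedged reading of the interfaces file above: the IPv4 line "iface eth0 inet dhcp" is commented out, so nothing runs a DHCP client at boot, which would explain the manual "sudo dhclient eth0". A sketch of a corrected file, assuming plain DHCP is what the school network expects:

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet dhcp       # restore IPv4 via DHCP at boot
      iface eth0 inet6 auto      # keep the autoconfigured IPv6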

    Read the article

  • Xmonad Xsession

    - by AntLord
    My user level: noob-ish, so please bear with me. I'm running 12.04 LTS. I have installed and, to some extent, configured xmonad 0.10. The "automagically" created xsession for it works fine as it is, but when I log in it won't run a startup script I've created and "call from" /usr/share/xsessions/xmonad.desktop, if that's right. I've read pretty much all I could find about .xinitrc and .xsession; I tried that and it somehow messed up the other "sessions", if I'm explaining myself correctly. I had to run unity --reset to have the "main session" working again. Anyway, my question is: how do I autostart xmobar and set a desktop background after logging into xmonad's default xsession? I tried this script, start-xmonad:

      #!/bin/bash
      #
      # I only used one of the following each time I tried; none worked.
      # Also, do I really need the '&'? I know what they're for, but...
      nitrogen --restore &
      feh --bg-scale ~/Pictures/picture.png &

      # Then I want xmobar to start. Again, do I need the '&'? I know it's for it to run
      # in the background, but I tried removing the '&' and xmonad still launched.
      xmobar &

      # Finally, the only thing that seems to work in this script
      exec xmonad

    Yes, I made sure I did chmod +x ~/start-xmonad. The xmonad.desktop is:

      [Desktop Entry]
      Name=XMonad
      Encoding=UTF-8
      Comment=Lightweight tiling window manager
      Exec=/home/myusername/start-xmonad
      Icon=custom_xmonad_badge.png
      Type=XSession

    So, this didn't work, now I'm here. Please help :s thanks
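    A hedged debugging idea before changing anything else: log what the script does, so you can tell whether the session runs it at all (the log file name is illustrative). Note that xmobar runs forever, so it genuinely needs the '&'; otherwise the script would never reach the final exec xmonad line:

      #!/bin/bash
      exec >> "$HOME/.xsession-xmonad.log" 2>&1     # capture all output for inspection
      echo "start-xmonad ran at $(date)"
      nitrogen --restore &
      xmobar &
      exec xmonad                                   # exec keeps the session tied to xmonad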

    Read the article

  • Why does Ubuntu keep trying to connect to a WiFi network while plugged into ethernet?

    - by labarna
    My desk is situated at the edge of the range of a wireless signal which I use occasionally (when away from my desk) and which is therefore saved in Network Manager. At my desk, however, I plug into the ethernet cable. While I'm working, the computer is constantly trying to join the wireless network, and usually failing. This results in two annoying behaviors:

    1. In GNOME Shell, the network connect and disconnect notices keep popping up at the bottom of the screen and I have to click them to make them disappear (I assume this has been fixed in the next version of GNOME).

    2. (The worst!) Occasionally the WiFi password dialog will pop up and ask for the password to this network (which is already saved). An additional annoyance is that in GNOME Shell I'll get two copies of the dialog that I have to cancel: one is GNOME Shell themed (no window border etc.) and the other is just normal GNOME themed. (Sometimes, if I've been away from the computer for a while, I will have multiple copies of this dialog up, as it's been trying to connect for a while, resulting at times in 20 dialogs to cancel.)

    Note, all the while I've been happily connected to the ethernet and have full network access. This is incredibly annoying and distracting. Why doesn't Ubuntu stop trying to connect to WiFi if I'm on the ethernet (unless I want to broadcast my own network, but that's different)?
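    A hedged pair of workarounds using nmcli (exact syntax varies between Network Manager releases; "OfficeWifi" is a placeholder for the saved connection name):

      nmcli radio wifi off        # newer nmcli; older releases use: nmcli nm wifi off
      nmcli radio wifi on         # when leaving the desk
      # Or stop just this one network from auto-connecting:
      nmcli connection modify OfficeWifi connection.autoconnect no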

    Read the article

  • EM12c: Using the LIST verb in emcli

    - by SubinDaniVarughese
    Many of us who use EM CLI to write scripts and automate our daily tasks should not miss out on the new list verb released with Oracle Enterprise Manager 12.1.0.3.0. The combination of list and Jython-based scripting support in EM CLI makes it easier to achieve automation for complex tasks with just a few lines of code. Before I jump into a script, let me highlight the key attributes of the list verb and why it's simply excellent!

    1. Multiple resources under a single verb:
    A resource can be a set of users or targets, etc. Using the list verb, you can retrieve information about a resource from the repository database. Here is an example which retrieves the list of administrators within EM.

    Standard mode:
      $ emcli list -resource="Administrators"

    Interactive mode:
      emcli>list(resource="Administrators")
    The output will be the same as standard mode.

    Script mode:
      $ emcli @myAdmin.py
      Enter password :  ******
    The output will be the same as standard mode. Contents of the myAdmin.py script:
      login()
      print list(resource="Administrators",jsonout=False).out()

    To get a list of all available resources use:
      $ emcli list -help
    With every release of EM, more resources are being added to the list verb. If you have a resource which you feel would be valuable, then go ahead and contact Oracle Support to log an enhancement request with product development. Be sure to say how the resource is going to help improve your daily tasks.

    2. Consistent formatting:
    It is possible to format the output of any resource consistently using these options:

    -columns
    This option is used to specify which columns should be shown in the output. Here is an example which shows the list of administrators and their account status:
      $ emcli list -resource="Administrators" -columns="USER_NAME,REPOS_ACCOUNT_STATUS"
    To get a list of columns in a resource use:
      $ emcli list -resource="Administrators" -help
    You can also specify the width of each column. For example, here the column width of user_type is set to 20 and department to 30:
      $ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE:20,COST_CENTER,CONTACT,DEPARTMENT:30"
    This is useful if your terminal is too small or you need to fine-tune a list of specific columns for quick use or improved readability.

    -colsize
    This option is used to resize column widths. Here is the same example as above, but using -colsize to define the width of user_type as 20 and department as 30:
      $ emcli list -resource=Administrators -columns="USER_NAME,USER_TYPE,COST_CENTER,CONTACT,DEPARTMENT" -colsize="USER_TYPE:20,DEPARTMENT:30"

    The existing standard EM CLI formatting options are also available in the list verb: -format="name:pretty" | -format="name:script" | -format="name:csv" | -noheader | -script
    There are so many uses depending on your needs. Have a look at the resources and the columns in each resource. Refer to the EM CLI book in the EM documentation for more information.

    3. Search:
    Using the -search option in the list verb makes it possible to search for a specific row in a specific column within a resource. This is similar to the SQL*Plus WHERE clause.
    The following operators are supported: =, !=, >, <, >=, <=, like, is (must be followed by null or not null).

    Here is an example which searches for all EM administrators in the marketing department located in the USA:
      $ emcli list -resource="Administrators" -search="DEPARTMENT ='Marketing'" -search="LOCATION='USA'"
    Here is another example which shows all the named credentials created since a specific date:
      $ emcli list -resource=NamedCredentials -search="CredCreatedDate > '11-Nov-2013 12:37:20 PM'"
    Note that the timestamp has to be in the format DD-MON-YYYY HH:MI:SS AM/PM.

    Some resources need a bind variable to be passed to get output. A bind variable is created in the resource and then referenced in the command. For example, this command will list all the default preferred credentials for target type oracle_database:
      $ emcli list -resource="PreferredCredentialsDefault" -bind="TargetType='oracle_database'" -colsize="SetName:15,TargetType:15"
    You can provide multiple bind variables. To verify whether a column is searchable or requires a bind variable, use the -help option. Here is an example:
      $ emcli list -resource="PreferredCredentialsDefault" -help

    4. Secure access:
    When the list verb collects data, it only displays content to which the administrator currently logged into EM CLI has access. For example, consider this use case: AdminA has access only to TargetA. AdminA logs into EM CLI. Executing the list verb to get the list of all targets will only show TargetA.

    5. User-defined SQL:
    Using the -sql option, user-defined SQL can be executed. The SQL provided in the -sql option is executed as the EM user MGMT_VIEW, which has read-only access to the EM published MGMT$ database views in the SYSMAN schema. To get the list of EM published MGMT$ database views, go to the Extensibility Programmer's Reference book in the EM documentation; there is a chapter about Using Management Repository Views. It is always recommended to reference the documentation for the supported MGMT$ database views. Suppose you are using the MGMT$ABC view, which is not in the chapter: since the view was not in the book and is not supported, it is possible that during upgrade the view will undergo a change in its structure or in the data in it. Using a supported view ensures that your scripts using -sql will continue working after upgrade. Here's an example:
      $ emcli list -sql='select * from mgmt$target'

    6. JSON output support:
    JSON (JavaScript Object Notation) enables data to be displayed as a collection of name/value pairs. There is a lot of reading material about JSON online for more information.

    As an example, we had a requirement where an EM administrator had many 11.2 databases in their test environment and the developers had asked the administrator to change the lifecycle status from Test to Production. This meant the admin had to go to the EM "All targets" page, identify the set of 11.2 databases, then go into each target database page and manually change the property to Production. Sounds easy to say, but this administrator had numerous targets, and the task is repeated for every release cycle. We told him there is an easier way to do this with a script, and that he can reuse the script whenever anyone wants to change a set of targets to a different lifecycle status.
    Here is a Jython script which uses list and JSON to change the "LifeCycle Status" property value of all 11.2 database targets. If you are new to scripting and Jython, I would suggest visiting the basic chapters of any Jython tutorial; understanding Jython is important to write the logic for your use case. If you already write scripts in Perl or shell, or know a programming language like Java, then you can easily understand the logic.

    Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

       1  from emcli import *
       2  search_list = ['PROPERTY_NAME=\'DBVersion\'','TARGET_TYPE= \'oracle_database\'','PROPERTY_VALUE LIKE \'11.2%\'']
       3  if len(sys.argv) == 2:
       4      print login(username=sys.argv[0])
       5      l_prop_val_to_set = sys.argv[1]
       6      l_targets = list(resource="TargetProperties", search=search_list, columns="TARGET_NAME,TARGET_TYPE,PROPERTY_NAME")
       7      for target in l_targets.out()['data']:
       8          t_pn = 'LifeCycle Status'
       9          print "INFO: Setting Property name " + t_pn + " to value " + l_prop_val_to_set + " for " + target['TARGET_NAME']
      10          print set_target_property_value(property_records=target['TARGET_NAME']+":"+target['TARGET_TYPE']+":"+t_pn+":"+l_prop_val_to_set)
      11  else:
      12      print "\n ERROR: Property value argument is missing"
      13      print "\n INFO: Format to run this file is filename.py <username> <Database Target LifeCycle Status Property Value>"

    You can download the script from here. I could not upload the file with a .py extension, so you need to rename the file to myScript.py before executing it using emcli.

    A line-by-line explanation for beginners:

    Line 1: Imports the emcli verbs as functions.
    Line 2: search_list is a variable to pass to the search option in the list verb. I am using the escape character for the single quotes. To pass more than one value for the same option in the list verb, define comma-separated values surrounded by square brackets, as above.
    Line 3: This is an "if" condition to ensure the user provides two arguments with the script; otherwise line 12 prints an error message.
    Line 4: Logging into EM. You can remove this if you have set up emcli with autologin. For more details about setup and autologin, please see the EM CLI book in the EM documentation.
    Line 5: l_prop_val_to_set is another variable; this is the property value to be set. Remember, we are changing the value from Test to Production. The benefit of this variable is that you can reuse the script to change the property value from and to any other values.
    Line 6: Here the output of the list verb is stored in l_targets. In the list verb I am passing the resource as TargetProperties, search as the search_list variable, and only the three columns I need for this task: target_name, target_type and property_name.
    Line 7: This is a for loop. The data in l_targets is available in JSON format; using the for loop, each pair will now be available in the 'target' variable.
    Line 8: t_pn is the "LifeCycle Status" variable. If required, I could take this as an input as well and then use the script to change any target property. In this example, I just wanted to change the "LifeCycle Status".
    Line 9: This is a message informing the user the script is setting the property value for dbxyz.
    Line 10: This line shows the set_target_property_value verb, which sets the value using the property_records option. Once it is set for a target pair, the loop moves to the next one. In my example I am just showing three dbs, but the real use is when you have 20 or 50 targets.
    The script is executed as:
      $ emcli @myScript.py subin Production

    The recommendation is to first test scripts before running them on a production system; we tested on a small set of targets, optimizing the script for fewer lines of code and better messaging. For your quick reference, the resources available with the list verb in Enterprise Manager 12.1.0.4.0 can be seen with:
      $ emcli list -help

    Watch this space for more blog posts using the list verb and EM CLI scripting use cases. I hope you enjoyed reading this blog post and that it has helped you gain more information about the list verb. Happy Scripting!!

    Disclaimer: The scripts in this post are subject to the Oracle Terms of Use located here.

    Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter
    Download the Oracle Enterprise Manager 12c Mobile app
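    Putting the documented options together, a small sketch of a reusable report (the output file name is illustrative):

      emcli login -username=sysman
      emcli list -resource="Administrators" \
            -columns="USER_NAME,REPOS_ACCOUNT_STATUS" \
            -format="name:csv" > administrators.csv    # CSV feeds spreadsheets or scripts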

    Read the article

  • IPv6 tunnels - any easy way to turn them on and off?

    - by Rob Hoare
    I've set up a tunnelbroker.net (Hurricane Electric) IPv6 tunnel from my laptop running 12.04. It works fine, and allows me to test the dual-stack configuration on my remote webservers etc. until native IPv6 is available from my ISP. However, there are times when I don't want the tunnel; for example, if I'm accessing something that requires an IPv4 address in my own country rather than the Tunnelbroker tunnel endpoint, or if I'm away from the local IPv4 tunnel endpoint, or if I simply want to test without IPv6.

    Is there a simple way to disable and then re-enable the IPv6 tunnel, without rebooting? For context, here's what's in my /etc/network/interfaces (NNN replaces numbers):

      auto he-ipv6
      iface he-ipv6 inet6 v4tunnel
          endpoint 216.218.NNN.NNN
          address 2001:470:NNN:NNN::2
          netmask 64
          up ip -6 route add default dev he-ipv6
          down ip -6 route del default dev he-ipv6

    Is there a network manager application (GUI or command line) to selectively enable/disable parts of /etc/network/interfaces, or IPv6 in general? I found that even after commenting that out (and reloading networking) it's tough to get the IPv6 to go away. A "tunnel on/off" button in networking would be great, like using a VPN.
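    Since the tunnel is an ordinary named stanza in /etc/network/interfaces, ifupdown can toggle it without a reboot; a minimal sketch:

      sudo ifdown he-ipv6     # tears the tunnel down; the 'down' hook removes the default route
      sudo ifup he-ipv6       # brings it back
      ip -6 route show        # verify whether the default route via he-ipv6 is present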

    Read the article

  • Endeca Information Discovery 3-Day Hands-on Training Boot-Camp

    - by Mike.Hallett(at)Oracle-BI&EPM
    For Oracle Partners, on October 15-17, 2012 in Paris, France: Register here.

    The Oracle Endeca Information Discovery (OEID) Boot-Camp is designed to give partners an understanding of OEID's features, and how it complements the existing Oracle Business Intelligence suite. Participants will learn how to develop and implement solutions using a Data Discovery method. Training is in English.

    What will be covered?
    The OEID Boot Camp is a three-day class with a combination of lecture and hands-on exercises, tailored to make participants aware of the Oracle Endeca Information Discovery platform and to help them gain valuable skills for the implementation of projects.

    Prerequisites
    You must bring a laptop with you for the hands-on labs. Attendees should have experience and familiarity with the basic concepts of business intelligence, and be OPN Partners with Gold membership or above. This training is free to OPN Partners. Click here for more information.

    Where and when?
    Monday, October 15th until Wednesday, October 17th included, 9:00-18:00
    Oracle France, 15, boulevard Charles de Gaulle, 92715 Colombes: Access Venue Map
    Register here. NOTE: there is a limited number of seats; you will get confirmation within 2 weeks.

    Read the article

  • Implementation Specialist OPN Exam for OBI Suite 11g is Now LIVE

    - by Mike.Hallett(at)Oracle-BI&EPM
    The OPN specialisation exam for implementation consultants of OBI Suite 11g is now live and ready for all partners. You can now update your specialisation certification to the latest product version, 11g, for OBI: until recently, the accreditation had examined skills for OBI 10g. For more details see Oracle Business Intelligence Foundation Suite 11g Essentials (1Z1-591), where you can apply for this Oracle Implementation Specialist credential.

    This exam is primarily intended for consultants skilled in implementing solutions based on Oracle Business Intelligence Foundation Suite 11g. The certification covers skills such as installing OBIEE, building the BI Server metadata repository, building BI dashboards, constructing ad hoc queries, defining security settings, and configuring and managing cache files. The exam targets the intermediate-level implementation team member. Up-to-date training and field experience are recommended.

    Also note the new OPN online Sales & Pre-sales Assessment Tests, available at:
    - Oracle Business Intelligence Foundation Suite 11g Sales Specialist
    - Oracle Business Intelligence Foundation Suite 11g PreSales Specialist
    - Oracle Business Intelligence Foundation Suite 11g Support Specialist

    FREE Certification Testing at OpenWorld
    Are you attending OPN Exchange @ OpenWorld? Then join us at the OPN Specialist Test Fest, October 1st-4th 2012, Marriott Marquis Hotel: Pre-register here now!

    For More Information
    OPN Certified Specialist exams
    OPN Certified Specialist FAQ
    Enablement 2.0 Get Specialized!

    Read the article

  • How to remove all associated files and configuration settings of an app installed through the 'force architecture' command

    - by Mysterio
    A few weeks ago I installed a 32-bit .deb file through the 'force architecture' command (on my 64-bit notebook); however, the procedure was unsuccessful and I used the apt-get purge command to uninstall the app. It seems there are some leftovers of the app I uninstalled, which have now broken system updates. Synaptic recommended a sudo apt-get install -f, which I ran in the terminal with this initial response:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      The following package was automatically installed and is no longer required:
        libntfs10
      Use 'apt-get autoremove' to remove them.
      The following packages will be REMOVED:
        crossplatformui
      0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
      1 not fully installed or removed.
      After this operation, 0 B of additional disk space will be used.
      Do you want to continue [Y/n]?

    I chose 'Y' and got this response:

      (Reading database ... 187616 files and directories currently installed.)
      Removing crossplatformui ...
      ztemtvcdromd: no process found
      dpkg: error processing crossplatformui (--remove):
       subprocess installed post-removal script returned error exit status 1
      Errors were encountered while processing:
       crossplatformui
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    It seems the app I installed, crossplatformui, is still on my system and has caused Update Manager to stop running with a partial upgrade warning. What do I do now?
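    A commonly suggested (use-at-your-own-risk) way past a failing post-removal script is to replace the package's postrm with a no-op and remove the package again; a hedged sketch:

      printf '#!/bin/sh\nexit 0\n' | sudo tee /var/lib/dpkg/info/crossplatformui.postrm
      sudo dpkg --remove crossplatformui
      sudo apt-get -f install       # let apt finish any remaining cleanup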

    Read the article

  • Update gets stuck unpacking bad package, won't continue without it

    - by Shazzner
    Removing the package from the cache and disabling Recommended Updates in Software Sources gives me an error saying I need to install this package. I've tried to update several times, but it keeps hanging on unpacking the ubuntu-sso-client package, which forces me to hard-reset to unlock the package manager. I've tried:

      sudo dpkg --configure -a
    No errors.

      sudo apt-get upgrade --fix-broken
    Wants me to reinstall said package, resulting in it hanging.

    Removing the package:
      sudo rm -f /var/cache/apt/archives/ubuntu-sso-client_1.0.8-0ubuntu1_all.deb
    Results in the same effect: it re-downloads, then hangs.

    I can de-select Recommended Updates, but then I get error messages when I try to update again:
      E: The package ubuntu-sso-client needs to be reinstalled, but I can't find an archive for it.
    which won't let me continue.

    Finally, re-enabling the source, I try to remove ubuntu-sso:
      sudo apt-get remove ubuntu-sso-client
    It removes a bunch of other packages but complains about this package:
      dpkg: error processing ubuntu-sso-client (--remove):
       Package is in a very bad inconsistent state - you should reinstall it before attempting a removal.
    Reinstalling ubuntu-sso-client hangs :( I'm at my wits' end, any ideas? It would be nice to install all the other updates, but this one is preventing it.
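    A hedged last-resort sketch for a package stuck in a "very bad inconsistent state", using dpkg's documented force options (worth backing up first):

      sudo dpkg --remove --force-remove-reinstreq ubuntu-sso-client
      sudo apt-get update
      sudo apt-get -f install       # repair whatever dependencies remain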

    Read the article

  • Advanced Exalytics, Endeca and OBI Training for Partners

    - by Mike.Hallett(at)Oracle-BI&EPM
    In Sweden and in Saudi Arabia there will be the next set of hands-on advanced Exalytics, Endeca and OBI training workshops for partners; partners from any country may attend. These are free of charge to OPN Specialised member partners, but are subject to availability (please do not attend unless you have received a confirmation from Oracle). Check the workshop agendas and pre-requisites, and register from the links below:

    Exalytics and Business Intelligence Hands-on Workshop (3 days)
    - August 24-26, 2013: Oracle Riyadh, Saudi Arabia
    - September 3-5, 2013: Oracle Kista, Sweden

    Endeca Information Discovery Hands-on Workshop (3 days)
    - October 22-24, 2013: Oracle Kista, Sweden

    Advanced Oracle Business Intelligence Hands-on Workshop (3 days)
    - August 19-21, 2013: Oracle Riyadh, Saudi Arabia

    Read the article

  • "Unmet Dependencies" problem when trying apt-get install

    - by GChorn
    Any time I try to install Python packages using the command:

      sudo apt-get install python-package

    I get the following output:

      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      You might want to run 'apt-get -f install' to correct these:
      The following packages have unmet dependencies:
       linux-headers-generic : Depends: linux-headers-3.2.0-36-generic but it is not going to be installed
       linux-headers-generic-pae : Depends: linux-headers-3.2.0-36-generic-pae but it is not going to be installed
       linux-image-generic : Depends: linux-image-3.2.0-36-generic but it is not going to be installed
      E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    This seems to have started when these same three packages showed up in Ubuntu's Update Manager and threw an error when I tried to install them there. Based on the suggestion in the output above, I tried running:

      sudo apt-get -f install

    But this only gave me several instances of the following error:

      dpkg: error processing /var/cache/apt/archives/linux-image-3.2.0-36-generic_3.2.0-36.57_i386.deb (--unpack):
       unable to create `/lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko.dpkg-new' (while processing `./lib/modules/3.2.0-36-generic/kernel/drivers/net/wireless/ath/carl9170/carl9170.ko'): No space left on device

    Now maybe I'm way off-base here, but I'm wondering if the error could be coming from the "No space left on device" part? The thing is, I'm running Ubuntu as a VirtualBox VM, but I've got it set to dynamically increase its virtual hard drive space as needed, so why am I still getting this error? Here's my output when I use df -h:

      Filesystem        Size  Used Avail Use% Mounted on
      /dev/sda1         6.9G  5.7G  869M  88% /
      udev              494M  4.0K  494M   1% /dev
      tmpfs             201M  784K  200M   1% /run
      none              5.0M     0  5.0M   0% /run/lock
      none              501M   76K  501M   1% /run/shm
      VB_Shared_Folder  466G  271G  195G  59% /media/sf_VB_Shared_Folder

    When I perform sudo apt-get -f install and the system says "After this operation, 192 MB of additional disk space will be used", does that mean 192 MB of my virtual machine's current free space, or 192 MB on top of the rest of my free space? As I said, my machine normally dynamically allocates additional space from the host machine, so I don't see why there would be space restrictions at all...
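    The df output answers part of this: / is 88% full with only 869M free, and a dynamically allocated VirtualBox disk still has a fixed maximum size (here about 6.9G), so the device really is out of room. A sketch of the usual ways to free space (the old-kernel package name is hypothetical; list yours first):

      sudo apt-get clean                        # empty /var/cache/apt/archives
      dpkg -l 'linux-image-*' | grep ^ii        # see which kernels are installed
      sudo apt-get remove linux-image-3.2.0-NN-generic   # NN = an old kernel you no longer boot
      df -h /                                   # re-check free space, then retry apt-get -f install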

    Read the article

  • How does the CBO compute cardinality?

    - by Liu Maclean
    A question came up on Itpub recently about how the CBO computes cardinality; here is a reproduction of the case:

      SQL> create table maclean1 as select * from dba_objects;
      Table created.

      SQL> update maclean1 set status='INVALID' where owner='MACLEAN';
      2 rows updated.

      SQL> commit;
      Commit complete.

      SQL> create index ind_maclean1 on maclean1(status);
      Index created.

      SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN1',cascade=>true);
      PL/SQL procedure successfully completed.

      SQL> explain plan for select * from maclean1 where status='INVALID';
      Explained.

      SQL> set linesize 140 pagesize 1400
      SQL> select * from table(dbms_xplan.display());

      PLAN_TABLE_OUTPUT
      ---------------------------------------------------------------------------
      Plan hash value: 987568083
      ------------------------------------------------------------------------------
      | Id | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
      ------------------------------------------------------------------------------
      |  0 | SELECT STATEMENT  |          | 11320 | 1028K |    85   (0)| 00:00:02 |
      |* 1 |  TABLE ACCESS FULL| MACLEAN1 | 11320 | 1028K |    85   (0)| 00:00:02 |
      ------------------------------------------------------------------------------
      Predicate Information (identified by operation id):
      ---------------------------------------------------
         1 - filter("STATUS"='INVALID')
      13 rows selected.

    10053 trace:

      Access path analysis for MACLEAN1
      ***************************************
      SINGLE TABLE ACCESS PATH
        Single Table Cardinality Estimation for MACLEAN1[MACLEAN1]
        Column (#10): STATUS(
          AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.500000
        Table: MACLEAN1  Alias: MACLEAN1
          Card: Original: 22639.000000  Rounded: 11320  Computed: 11319.50  Non Adjusted: 11319.50
        Access Path: TableScan
          Cost:  85.33  Resp: 85.33  Degree: 0
            Cost_io: 85.00  Cost_cpu: 11935345
            Resp_io: 85.00  Resp_cpu: 11935345
        Access Path: index (AllEqRange)
          Index: IND_MACLEAN1
          resc_io: 185.00  resc_cpu: 8449916
          ix_sel: 0.500000  ix_sel_with_filters: 0.500000
          Cost: 185.24  Resp: 185.24  Degree: 1
        Best:: AccessPath: TableScan
               Cost: 85.33  Degree: 1  Resp: 85.33  Card: 11319.50  Bytes: 0

    The 10053 trace shows that, with no histogram on the column, Density = 0.5, i.e. 1/NDV. The optimizer therefore estimates that STATUS='INVALID' returns half the table (Card 11320) and picks the full table scan. In reality the "STATUS"='INVALID' condition matches only 2 rows: the status column data is heavily skewed, but because dbms_stats collected no histogram here, the CBO does not choose the far cheaper INDEX RANGE scan on ind_maclean1 -- the optimizer has been misled.

    Let's repeat the experiment, this time letting dbms_stats build a histogram on the column, and see whether the CBO can then estimate the cardinality of status='INVALID' correctly:

      [oracle@vrh4 ~]$ sqlplus / as sysdba

      SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 17 19:15:45 2011
      Copyright (c) 1982, 2010, Oracle. All rights reserved.

      Connected to:
      Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
      With the Partitioning, OLAP, Data Mining and Real Application Testing options

      SQL> select * from v$version;
      BANNER
      --------------------------------------------------------------------------------
      Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
      PL/SQL Release 11.2.0.2.0 - Production
      CORE 11.2.0.2.0 Production
      TNS for Linux: Version 11.2.0.2.0 - Production
      NLSRTL Version 11.2.0.2.0 - Production

      SQL> show parameter optimizer_fea
      NAME                                 TYPE        VALUE
      ------------------------------------ ----------- ------------------------------
      optimizer_features_enable            string      11.2.0.2

      SQL> select * from global_name;
      GLOBAL_NAME
      --------------------------------------------------------------------------------
      www.oracledatabase12g.com & www.askmaclean.com

      SQL> drop table maclean;
      Table dropped.
      SQL> create table maclean as select * from dba_objects;
      Table created.

      SQL> update maclean set status='INVALID' where owner='MACLEAN';
      2 rows updated.

      SQL> commit;
      Commit complete.

      SQL> create index ind_maclean on maclean(status);
      Index created.

      SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN',cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE 2');
      PL/SQL procedure successfully completed.

    This time we ask for a 2-bucket histogram on every column. To inspect the resulting FREQUENCY histogram we can use a script written by Guy Harrison of Quest:

      rem
      rem Generate a histogram of data distribution in a column as recorded
      rem in dba_tab_histograms
      rem
      rem Guy Harrison Jan 2010 : www.guyharrison.net
      rem
      rem hexstr function is from http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563
      rem
      set pagesize 10000
      set lines 120
      set verify off
      col char_value format a10 heading "Endpoint|value"
      col bucket_count format 99,999,999 heading "bucket|count"
      col pct format 999.99 heading "Pct"
      col pct_of_max format a62 heading "Pct of|Max value"
      rem col endpoint_value format 9999999999999 heading "endpoint|value"

      CREATE OR REPLACE FUNCTION hexstr (p_number IN NUMBER)
         RETURN VARCHAR2
      AS
         l_str    LONG := TO_CHAR (p_number, 'fm' || RPAD ('x', 50, 'x'));
         l_return VARCHAR2 (4000);
      BEGIN
         WHILE (l_str IS NOT NULL)
         LOOP
            l_return := l_return || CHR (TO_NUMBER (SUBSTR (l_str, 1, 2), 'xx'));
            l_str := SUBSTR (l_str, 3);
         END LOOP;
         RETURN (SUBSTR (l_return, 1, 6));
      END;
      /

      WITH hist_data AS (
         SELECT endpoint_value, endpoint_actual_value,
                NVL (LAG (endpoint_value) OVER (ORDER BY endpoint_value), ' ') prev_value,
                endpoint_number,
                endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count
           FROM dba_tab_histograms
           JOIN dba_tab_col_statistics USING (owner, table_name, column_name)
          WHERE owner = '&owner'
            AND table_name = '&table'
            AND column_name = '&column'
            AND histogram = 'FREQUENCY')
      SELECT nvl (endpoint_actual_value, endpoint_value) endpoint_value,
             bucket_count,
             ROUND (bucket_count * 100 / SUM (bucket_count) OVER (), 2) pct,
             RPAD (' ', ROUND (bucket_count * 50 / MAX (bucket_count) OVER ()), '*') pct_of_max
        FROM hist_data;

      WITH hist_data AS (
         SELECT endpoint_value, endpoint_actual_value,
                NVL (LAG (endpoint_value) OVER (ORDER BY endpoint_value), ' ') prev_value,
                endpoint_number,
                endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count
           FROM dba_tab_histograms
           JOIN dba_tab_col_statistics USING (owner, table_name, column_name)
          WHERE owner = '&owner'
            AND table_name = '&table'
            AND column_name = '&column'
            AND histogram = 'FREQUENCY')
      SELECT hexstr (endpoint_value) char_value,
             bucket_count,
             ROUND (bucket_count * 100 / SUM (bucket_count) OVER (), 2) pct,
             RPAD (' ', ROUND (bucket_count * 50 / MAX (bucket_count) OVER ()), '*') pct_of_max
        FROM hist_data
       ORDER BY endpoint_value;

    Running this script against our table shows the FREQUENCY histogram that dbms_stats has just built: the STATUS='INVALID' bucket covers only a tiny share of the rows (about 0.04%), which translates into an estimate of roughly 9 rows. Let's verify with the execution plan and the 10053 trace:

      SQL> explain plan for select * from maclean where status='INVALID';
      Explained.
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
-------------------------------------
Plan hash value: 3087014066
-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |     9 |   837 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| MACLEAN     |     9 |   837 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IND_MACLEAN |     9 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("STATUS"='INVALID')

With the frequency histogram in place, the CBO now estimates the cardinality of STATUS='INVALID' accurately (9 rows instead of 11320) and accordingly chooses the index range scan over the full table scan. The 10053 trace for this query confirms it:

SQL> alter system flush shared_pool;
System altered.

SQL> oradebug setmypid;
Statement processed.

SQL> oradebug event 10053 trace name context forever ,level 1;
Statement processed.

SQL> explain plan for select * from maclean where status='INVALID';
Explained.

SINGLE TABLE ACCESS PATH
  Single Table Cardinality Estimation for MACLEAN[MACLEAN]
  Column (#10):
    NewDensity:0.000199, OldDensity:0.000022 BktCnt:22640, PopBktCnt:22640, PopValCnt:2, NDV:2

(Note that NewDensity = bucket_count / SUM(bucket_count) / 2 here.)

  Column (#10): STATUS(
    AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.000199
    Histogram: Freq  #Bkts: 2  UncompBkts: 22640  EndPtVals: 2
  Table: MACLEAN  Alias: MACLEAN
    Card: Original: 22640.000000  Rounded: 9  Computed: 9.00  Non Adjusted: 9.00
  Access Path: TableScan
    Cost:  85.30  Resp: 85.30  Degree: 0
      Cost_io: 85.00  Cost_cpu: 10804625
      Resp_io: 85.00  Resp_cpu: 10804625
  Access Path: index (AllEqRange)
    Index: IND_MACLEAN
    resc_io: 2.00  resc_cpu: 20763
    ix_sel: 0.000398  ix_sel_with_filters: 0.000398
    Cost: 2.00  Resp: 2.00  Degree: 1
  Best:: AccessPath: IndexRange
  Index: IND_MACLEAN
         Cost: 2.00  Degree: 1  Resp: 2.00  Card: 9.00  Bytes: 0

The 2-bucket frequency histogram is what lets the CBO cost the alternatives correctly. But do we have to specify method_opt explicitly every time? What does the default, dbms_stats.DEFAULT_METHOD_OPT, actually do? When deciding whether a column needs a histogram, dbms_stats consults col_usage$, a dictionary table that records how each column has been used in predicates (see the separate note on col_usage$ and SMON for details). If we drop and recreate the table and gather statistics straight away, col_usage$ holds no record of the column ever appearing in a predicate, so dbms_stats builds no histogram. For example:

SQL> drop table maclean;
Table dropped.

SQL> create table maclean as select * from dba_objects;
Table created.

SQL> update maclean set status='INVALID' where owner='MACLEAN';
2 rows updated.

SQL> commit;
Commit complete.

SQL> create index ind_maclean on maclean(status);
Index created.

Gather statistics on maclean with the default method_opt:

SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN');
PL/SQL procedure successfully completed.

@histogram.sql
Enter value for owner: SYS
old  12:    WHERE owner = '&owner'
new  12:    WHERE owner = 'SYS'
Enter value for table: MACLEAN
old  13:      AND table_name = '&table'
new  13:      AND table_name = 'MACLEAN'
Enter value for column: STATUS
old  14:      AND column_name = '&column'
new  14:      AND column_name = 'STATUS'

no rows selected

Because col_usage$ contains no usage information for the status column yet, no histogram was created on it.
declare
begin
  for i in 1..500 loop
    execute immediate ' alter system flush shared_pool';
    DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
    execute immediate 'select count(*) from maclean where status=''INVALID'' ';
  end loop;
end;
/
PL/SQL procedure successfully completed.

SQL> select obj# from obj$ where name='MACLEAN';
      OBJ#
----------
     97215

SQL> select * from col_usage$ where OBJ#=97215;

      OBJ#    INTCOL# EQUALITY_PREDS EQUIJOIN_PREDS NONEQUIJOIN_PREDS RANGE_PREDS LIKE_PREDS NULL_PREDS TIMESTAMP
---------- ---------- -------------- -------------- ----------------- ----------- ---------- ---------- ---------
     97215          1              1              0                 0           0          0          0 17-OCT-11
     97215         10            499              0                 0           0          0          0 17-OCT-11

SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN');
PL/SQL procedure successfully completed.

@histogram.sql
Enter value for owner: SYS
Enter value for table: MACLEAN
Enter value for column: STATUS

Endpoint        bucket            Pct of
value            count     Pct    Max value
---------- ----------- ------- --------------------------------------------------------------
INVALI               2     .04
VALIC3           5,453   99.96  *************************************************

This time, because col_usage$ now records that the status column appeared in 499 equality predicates, the default method_opt decides the column deserves a histogram, and dbms_stats builds the frequency histogram on its own.
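To see where the CBO's numbers come from, the estimate can be recomputed by hand from dba_tab_histograms. A minimal sanity-check query follows (a sketch, assuming the SYS.MACLEAN example above is still in place; for a FREQUENCY histogram, selectivity is simply the value's bucket count over total buckets):

-- Recompute the optimizer's cardinality estimate from the stored histogram:
--   selectivity = bucket_count / total_buckets  -> ix_sel 9/22640 = 0.000398
--   cardinality = num_rows * selectivity        -> 22640 * 0.000398 ~= 9
-- Without a histogram: density = 1/NDV = 0.5    -> 22639 * 0.5 ~= 11320
SELECT h.endpoint_number
         - NVL (LAG (h.endpoint_number) OVER (ORDER BY h.endpoint_number), 0) bucket_count,
       t.num_rows,
       ROUND (t.num_rows
              * (h.endpoint_number
                 - NVL (LAG (h.endpoint_number) OVER (ORDER BY h.endpoint_number), 0))
              / MAX (h.endpoint_number) OVER ()) est_card
  FROM dba_tab_histograms h, dba_tables t
 WHERE h.owner = t.owner
   AND h.table_name = t.table_name
   AND h.owner = 'SYS'
   AND h.table_name = 'MACLEAN'
   AND h.column_name = 'STATUS';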

    Read the article

  • Big Data&rsquo;s Killer App&hellip;

    - by jean-pierre.dijcks
    Recently Keith spent some time talking about the cloud on this blog, and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from? OK, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extent to write this post.

What is Big Data anyways? The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well, it turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical; it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle, and you may store distilled clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data.

NoSQL and MapReduce Nope, sorry, this is not the killer app... and no, I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with Perl or Tcl, others with MapReduce. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this (a sketch of such an external table appears at the end of this post). I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting, and some people will use Java. Some data is stored on HDFS, making Cascading the way to go; some data is stored in Oracle, and SQL does a good job there. As with storage and with history, be pragmatic and use what fits, and neither NoSQL nor MapReduce will be the one and only. Also, a language, while important, does not in itself deliver business value. So while cool, it is not a killer app...

Vertical Behavioral Analytics This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but I'm after something more interesting that you can only do after collecting large volumes of specific data.
That all-important data is about behavior. What do my customers do? More importantly, why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. That is where the Big Data lives and where these patterns are now emerging. Other examples are emerging as well, and one of the examples used at the conference was about predicting churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, its own quirky logic and its own demands and priorities. Of course, the methods and some of the software will be common, and some players will have both retail and service industry analytics in place (your corner coffee store, for example). But the gist of it all is that analytics that can predict customer behavior for a specific, focused group of people in a specific industry is what makes Big Data interesting.

Building a Vertical Behavioral Analysis System Well, that is going to be interesting. I have not seen much going on in that space, and if I had one criticism of the Cloud Connect conference it would be the lack of concrete use cases on big data. The telco example, while a step into the vertical behavioral part, is not really big data: it used a sample of data from the customer's data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there, including companies like SAS. In-place, btw, does not mean "no data movement at all"; what it means is that you will do this at the data's permanent home. For SAS that is kind of the current problem: most of the inputs live in a data warehouse, so why move them into SAS and back? That all worked with 1 TB data warehouses, but not when we are looking at 100 TB to 500 TB of distilled data...

Comments? As it is still early days with these systems, I'm very interested in seeing reactions and responses to some of these thoughts...
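On the external-tables point above: here is a minimal sketch of a flat-file external table with an inline preprocessor, as supported by the ORACLE_LOADER driver since Oracle Database 11.2. The directory objects, script name, and column layout are hypothetical, chosen only to illustrate the clickstream scenario:

-- Hypothetical directories and file names; the preprocessor script runs
-- against the source file before each load, e.g. to decompress or verify it.
CREATE TABLE clicks_ext (
  ts       VARCHAR2(30),
  user_id  NUMBER,
  url      VARCHAR2(4000)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR exec_dir:'gunzip_wrapper.sh'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('clicks.csv.gz')
)
REJECT LIMIT UNLIMITED;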

    Read the article

  • Who is Jeremiah Owyang?

    - by Michael Hylton
    Q: What’s your current role and what career path brought you here? J.O.: I'm currently a partner and one of the founding team members at Altimeter Group. I'm the Research Director, and I also wear the hat of Industry Analyst. Prior to joining Altimeter, I was an Industry Analyst at Forrester covering Social Computing, and before that, I deployed and managed the social media program at Hitachi Data Systems in Santa Clara. Around that time, I started a career blog called Web Strategy which focused on how companies were using the web to connect with customers --and never looked back. Q: As an industry analyst, what are you focused on these days? J.O.: There are three trends that I'm focusing my research on at this time: 1) The Dynamic Customer Journey: Individuals (both B2C and B2B) are given so many options in their sources of data, channels to choose from and screens to consume them on that we've found that at each given touchpoint there are 75 potential permutations. Companies that can map this, then deliver information to individuals when they need it, will have a competitive advantage, and we want to find out who's doing this. 2) One of the sub-themes that supports this trend is Social Performance. Yesterday's social web was disparate engagement of humans, but the next phase will be data driven, and soon new technologies will emerge to help all those that are consuming, publishing, and engaging on the social web to be more efficient with their time through forms of automation. As you might expect, this comes with upsides and downsides. 3) The Sentient World is our research theme that looks out the furthest, as the world around us (even inanimate objects) becomes 'self aware' and able to talk back to us via digital devices and beyond. Big data, the internet of things, and mobile devices will all be part of this next set. Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work? J.O.: The lines between our work and personal lives are dissolving, and this leads to a greater upside of being always connected and of having deeper relationships with those that are not. It also means a downside of society's expectation that we're always around and available for colleagues, customers, and beyond. In the future, a balance will be sought as we seek to achieve the goals of family, friends, work, and our own personal desires. All of this is, ironically, being written at 4:30 am on a Sunday. Q: How can people keep up with what you’re working on? J.O.: A great question, thanks. There are a few sources of information; I'll lead with the first, which is my blog at web-strategist.com. A few times a week I'll publish my industry insights (hires, trends, forces, funding, M&A, business needs), as well as on Twitter, where I'll point to all the news that's fit to print @jowyang.
As my research reports go live (we publish them for all to read --called Open Research-- at no cost) they'll emerge on my blog; check out the research tab to find out more now: http://www.web-strategist.com/blog/research/ Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon? J.O.: Absolutely! This coming Thursday, September 13th, I’m doing a webcast with Oracle on “Managing Social Relationships for the Enterprise”. This is going to be a great discussion with Reggie Bradford, Senior Vice President of Product Development at Oracle, and Christian Finn, Senior Director of Product Management for Oracle WebCenter. I’m looking forward to a great discussion around all those issues that so many companies are struggling with these days as they realize how much social media is impacting their business. It’s changing the way your customers and employees interact with your brand. Today it’s no longer a matter of when to become a social-enabled enterprise, but how to become a successful one. Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peek of what to expect in Thursday’s webcast? J.O.: Below is a 15-minute video which encapsulates Altimeter’s themes on the Dynamic Customer Journey and the Sentient World. I’m really proud to have taken an active role in the first ever LeWeb outside of Paris. This one, featured in downtown London across the street from Westminster Abbey, was sold out. If you’ve not heard of LeWeb, this is a global Internet conference hosted by Loic and Geraldine Le Meur, a power couple that stem from Paris but also live in Silicon Valley; it is one of my favorite conferences for connecting with brands, technology innovators, investors and friends. Altimeter was able to play a minor role in suggesting the theme for the event, “Faster Than Real Time”, which builds on previous LeWebs that focused on the “real-time web”. In this radical state, companies are able to anticipate the needs of their customers by using data, technology, and devices, and deliver meaningful experiences before customers even know they need them. I explore two of the three Altimeter research themes, the Dynamic Customer Journey and the Sentient World, in my speech, but due to time did not focus on the Adaptive Organization.

    Read the article

  • Extending Database-as-a-Service to Provision Databases with Application Data

    - by Nilesh A
    Oracle Enterprise Manager 12c Database as a Service (DBaaS) empowers Self Service/SSA users to rapidly spawn databases on demand in the cloud. The configuration and structure of a provisioned database depends on the service template the Self Service user selects when requesting it. In EM12c, the DBaaS Self Service/SSA Administrator can host various service templates in the service catalog, each based on an underlying DBCA template. Provisioned databases often require production-scale data, whether for UAT, testing, or development purposes, and managing DBCA templates that embed data can be unwieldy. So we need to populate the database using the post-deployment script option, without any additional work for the SSA users. The SSA Administrator can automate this task in a few easy steps. For details on how to set up the DBaaS Self Service Portal, refer to the DBaaS Cookbook. In this article, I will list the steps required to enable EM 12c DBaaS to provision databases with application data in two distinct ways: 1) Data Pump, and 2) transportable tablespaces (TTS). The steps listed below are just examples of how to extend EM 12c DBaaS; you can even plug in your own method as part of the post-deployment script option. Using Data Pump to populate databases These are the steps to follow to extend DBaaS using the Data Pump methodology: The production DBA should run a Data Pump export on the production database and make the dump file available to all the servers participating in the database zone [sample shown in Fig. 1]:

-- Full export
expdp FULL=y DUMPFILE=data_pump_dir:dpfull1%U.dmp, data_pump_dir:dpfull2%U.dmp PARALLEL=4 LOGFILE=data_pump_dir:dpexpfull.log JOB_NAME=dpexpfull

Figure-1: Full export of database using Data Pump

Create a post-deployment SQL script [sample shown in Fig. 2]; this script can either be uploaded into the Software Library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned:

-- Full import
declare
   h1 NUMBER;
begin
   -- Create the directory object pointing to where the source database dump is backed up.
   execute immediate 'create directory DEST_LOC as ''/scratch/nagrawal/OracleHomes/oradata/INITCHNG/datafile''';
   -- Run the import.
   h1 := dbms_datapump.open (operation => 'IMPORT', job_mode => 'FULL', job_name => 'DB_IMPORT10');
   dbms_datapump.set_parallel(handle => h1, degree => 1);
   dbms_datapump.add_file(handle => h1, filename => 'IMP_GRIDDB_FULL.LOG', directory => 'DATA_PUMP_DIR', filetype => 3);
   dbms_datapump.add_file(handle => h1, filename => 'EXP_GRIDDB_FULL_%U.DMP', directory => 'DEST_LOC', filetype => 1);
   dbms_datapump.start_job(handle => h1);
   dbms_datapump.detach(handle => h1);
end;
/

Figure-2: Importing using DBMS_DATAPUMP PL/SQL procedures

Using DBCA, create a template for the production database – include all the init.ora parameters, tablespaces, datafiles and their sizes. The SSA Administrator should then customize the “Create Database Deployment Procedure” and provide the DBCA template created in the previous step. In the “Additional Configuration Options” step of the Customize “Create Database Deployment Procedure” flow, provide the name of the SQL script in the Custom Script section and lock the input (shown in Fig. 3). Continue saving the deployment procedure.

Figure-3: Using the Custom Script option for calling the import SQL

Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step.
Using transportable tablespaces to populate databases Taking a copy of all user/application tablespaces enables this method of populating databases. These are the steps required to extend DBaaS using transportable tablespaces: The production DBA needs to create a backup of the tablespaces. Datafiles may need conversion [such as from big-endian to little-endian or vice versa] depending on the platforms of production and of the destination where DBaaS creates the test database; a hedged sketch of the endianness check and conversion follows at the end of this post. Here is a sample backup script that shows how to find out whether any conversion is required, and that describes the steps needed to convert the datafiles and back up the tablespaces. The SSA Administrator should copy the database (tablespace) backup datafiles and export dumps to a backup location accessible from the hosts participating in the database zone(s). Create a post-deployment SQL script; this script can either be uploaded into the Software Library by the SSA Administrator or made available on a shared location accessible from the servers where databases are likely to be provisioned. Here is a sample post-deployment SQL script using transportable tablespaces. Using DBCA, create a template for the production database – all the init.ora parameters should be included. NOTE: DO NOT choose to bring tablespace data into this template, as the tablespaces will be created by the post-deployment step. The SSA Administrator should customize the “Create Database Deployment Procedure” and provide the DBCA template created in the previous step. In the “Additional Configuration Options” step of the flow, provide the name of the SQL script in the Custom Script section and lock the input. Continue saving the deployment procedure. Now, an SSA user can log in to the Self Service Portal and use the flow to provision a database that will also populate the data using the post-deployment step. More Information: Database-as-a-Service on Exadata Cloud Podcast on Database as a Service using Oracle Enterprise Manager 12c Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide DBaaS Cookbook Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter
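As promised above, here is a minimal sketch of the TTS endianness check, metadata export, and RMAN conversion. This is an assumption-laden outline, not the article's actual backup script: the tablespace name (app_data), platform string, and staging paths are hypothetical.

-- 1) Check the endian format of the source platform (run on both sides to compare):
SELECT d.platform_name, tp.endian_format
  FROM v$transportable_platform tp, v$database d
 WHERE tp.platform_name = d.platform_name;

-- 2) Make the tablespace read-only, then export its metadata with Data Pump:
SQL> ALTER TABLESPACE app_data READ ONLY;
$ expdp system DIRECTORY=data_pump_dir DUMPFILE=tts_app_data.dmp TRANSPORT_TABLESPACES=app_data

-- 3) If the endian formats differ, convert the datafiles with RMAN before copying them:
RMAN> CONVERT TABLESPACE app_data TO PLATFORM 'Linux x86 64-bit' FORMAT '/stage/%U';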

    Read the article

  • How do I restrict concurrent statistics gathering to a small set of tables from a single schema?

    - by Maria Colgan
    I got an interesting question from one of my colleagues on the performance team last week about how to restrict a concurrent statistics gather to a small subset of tables from one schema, rather than the entire schema. I thought I would share the solution we came up with because it was rather elegant, taking advantage of concurrent statistics gathering, incremental statistics, and the not-so-well-known “obj_filter_list” parameter of the DBMS_STATS.GATHER_SCHEMA_STATS procedure. You should note that the solution outlined below with “obj_filter_list” still applies even when concurrent statistics gathering and/or incremental statistics gathering is disabled. The reason my colleague had asked the question in the first place was because he wanted to enable incremental statistics for 5 large partitioned tables in one schema. The first time you gather statistics after you enable incremental statistics on a table, you have to gather statistics for all of the existing partitions so that a synopsis may be created for them. If the partitioned table in question is large and contains a lot of partitions, this could take a considerable amount of time. Since my colleague only had the Exadata environment at his disposal overnight, he wanted to re-gather statistics on the 5 partitioned tables as quickly as possible to ensure that it all finished before morning. Prior to Oracle Database 11g Release 2, the only way to do this would have been to write a script with an individual DBMS_STATS.GATHER_TABLE_STATS command for each partition in each of the 5 tables, as well as another one to gather global statistics on each table. Then, run the scripts in separate sessions and manually manage how many of those sessions could run concurrently. Since each table has over one thousand partitions, that would definitely be a daunting task and would most likely keep my colleague up all night! In Oracle Database 11g Release 2 we can take advantage of concurrent statistics gathering, which enables us to gather statistics on multiple tables in a schema (or database), and on multiple (sub)partitions within a table, concurrently. By using concurrent statistics gathering we no longer have to run individual statistics gathering commands for each partition. Oracle will automatically create a statistics gathering job for each partition, and one for the global statistics on each partitioned table. With the use of concurrent statistics, our script can now be simplified to just five DBMS_STATS.GATHER_TABLE_STATS commands, one for each table. This approach would work just fine, but we really wanted to get this down to just one command. So how can we do that? You may be wondering why we didn’t just use the DBMS_STATS.GATHER_SCHEMA_STATS procedure with the OPTION parameter set to ‘GATHER STALE’. Unfortunately the statistics on the 5 partitioned tables were not stale, and enabling incremental statistics does not mark the existing statistics stale. Plus, how would we limit the schema statistics gather to just the 5 partitioned tables? So we went to ask one of the statistics developers if there was an alternative way. The developer told us about the advantage of the “obj_filter_list” parameter of the DBMS_STATS.GATHER_SCHEMA_STATS procedure. The “obj_filter_list” parameter allows you to specify a list of objects that you want to gather statistics on within a schema or database. The parameter takes a collection of type DBMS_STATS.OBJECTTAB.
Each entry in the collection has five fields: the schema name or object owner, the object type (i.e., ‘TABLE’ or ‘INDEX’), the object name, the partition name, and the subpartition name. You don't have to specify all five fields for each entry. Empty fields in an entry are treated as wildcards (similar to the ‘*’ character in LIKE predicates). Each entry corresponds to one set of filter conditions on the objects. If you have more than one entry, an object qualifies for statistics gathering as long as it satisfies the filter conditions in one entry. You first must create the collection of objects, and then gather statistics for the specified collection. It’s probably easier to explain this with an example. I’m using the SH sample schema but needed a couple of additional partitioned tables to recreate my colleague's scenario of 5 partitioned tables. So I created SALES2, SALES3, and COSTS2 as copies of the SALES and COSTS tables respectively (setup.sql). I also deleted statistics on all of the tables in the SH schema beforehand to more easily demonstrate our approach. Step 0. Delete the statistics on the tables in the SH schema. Step 1. Enable concurrent statistics gathering. Remember, this has to be done at the global level. Step 2. Enable incremental statistics for the 5 partitioned tables. Step 3. Create the DBMS_STATS.OBJECTTAB and pass it to the DBMS_STATS.GATHER_SCHEMA_STATS command. Here, you will notice that we defined two variables of DBMS_STATS.OBJECTTAB type. The first, filter_lst, will be used to pass the list of tables we want to gather statistics on, and will be the value passed to the obj_filter_list parameter. The second, obj_lst, will be used to capture the list of tables that have had statistics gathered on them by this command, and will be the value passed to the objlist parameter. In Oracle Database 11g Release 2, you need to specify the objlist parameter in order to get the obj_filter_list parameter to work correctly, due to bug 14539274. We also needed to define the number of objects we would supply in the obj_filter_list; in our case we were specifying 5 tables (filter_lst.extend(5)). Finally, we need to specify the owner name and object name for each of the objects in the list. Once the list definition is complete we can issue the DBMS_STATS.GATHER_SCHEMA_STATS command (a sketch of this block appears at the end of this post). Step 4. Confirm statistics were gathered on the 5 partitioned tables. Here are a couple of other things to keep in mind when specifying the entries for the obj_filter_list parameter. If a field in the entry is empty, i.e., null, it means there is no condition on that field. In the above example, suppose you remove the statement Obj_filter_lst(1).ownname := ‘SH’; You will get the same result, since you have already specified the schema in gather_schema_stats, so there is no need to further specify ownname in the obj_filter_lst. All of the names in the entry are normalized, i.e., uppercased, if they are not double quoted. So in the above example it is OK to use Obj_filter_lst(1).objname := ‘sales’;. However, if you have a table called ‘MyTab’ instead of ‘MYTAB’, then you need to specify Obj_filter_lst(1).objname := ‘”MyTab”’; As I said before, although we have illustrated the usage of the obj_filter_list parameter for partitioned tables, with concurrent and incremental statistics gathering turned on, the obj_filter_list parameter is generally applicable to any gather_database_stats, gather_dictionary_stats and gather_schema_stats command. You can get a copy of the script I used to generate this post here.
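The original post shows the Step 3 block as a screenshot, which does not reproduce here. Below is a hedged reconstruction assembled from the names the text itself gives (filter_lst, obj_lst, filter_lst.extend(5), and the five SH tables); treat it as a sketch rather than the exact script from the post.

DECLARE
  filter_lst dbms_stats.objecttab := dbms_stats.objecttab();
  obj_lst    dbms_stats.objecttab := dbms_stats.objecttab();
BEGIN
  -- Five filter entries, one per partitioned table; empty fields act as wildcards.
  filter_lst.extend(5);
  filter_lst(1).ownname := 'SH';  filter_lst(1).objname := 'SALES';
  filter_lst(2).ownname := 'SH';  filter_lst(2).objname := 'SALES2';
  filter_lst(3).ownname := 'SH';  filter_lst(3).objname := 'SALES3';
  filter_lst(4).ownname := 'SH';  filter_lst(4).objname := 'COSTS';
  filter_lst(5).ownname := 'SH';  filter_lst(5).objname := 'COSTS2';
  -- objlist must be supplied for obj_filter_list to work correctly (bug 14539274).
  dbms_stats.gather_schema_stats(ownname         => 'SH',
                                 obj_filter_list => filter_lst,
                                 objlist         => obj_lst);
END;
/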
+Maria Colgan

    Read the article

  • 3 Trends for SMBs around Social, Mobile, and Sensor

    - by Socially_Aware_Enterprise
    While I often talk to big companies and discuss enterprise solutions, there are times when individuals ask me about small or medium-sized business trends. Interestingly, the enterprise Social, Mobile, and Sensor initiatives I regularly discuss are in fact relevant even to the mom-and-pop storefront. The ecosystem of new service players in the Social-Mobile-Sensor space generally emerges by developing partnerships with enterprises, as the players bring economy of scale to their services for the larger market. And of course Oracle has an entire division dedicated to delivering products and support that help emerging companies compete without the need to open an industrial-strength credit line. So here are some trends that we are helping large enterprises to deploy today, but that small and medium businesses should be able to take advantage of by the end of this year and into 2015. 1) The typical small business is generally "localized", but the ability to be "hyper-localized" will come as location-based services become ubiquitous. Many small businesses have one or several storefronts, typically within a single regional economic footprint. While the internet provides global reach, it will be the businesses that invest in social, mobile and local that win in the end. Of course I am a huge SoMoLo evangelist. SMBs' content and targeting with platforms for geo-fencing, geo-conquesting and path-matching to HHI are all going to be accessible to them, if not through mobile apps then via mobile messaging in the social networks that offer it. Expect to be able to target Facebook messaging not by city, but by store or mall… This makes being able to be "hyper-local" even more important. And with more proximity services coming online than ever before, SMBs will operate and serve customers with pinpoint accuracy, right down to where they stand in an aisle. Geo-conquesting will be huge for small players, letting them place ads when customers pass through competitors' regions; car dealers are doing this now. Also, iBeacons are now very cheap and getting easier to put in retail stores. The ability for sales to happen anywhere in the store via a mobile phone or tablet is huge, as it gives the small shop the flexibility not to have to "guard the register" when more or most transactions become digital. Thus, m-commerce and t-commerce will change the job of the cashier dramatically. 2) Intra-brand advocacy. The idea is that rather than depending only on your trusty social media manager and his team, you push more and more individuals with expertise inside the organization to help manage, reach out on, and utilize social channels to handle the incoming questions and answers customers need. While for years CRM was the tool of the enterprise, today CRMs let this now-"Salesforce et al" capability trickle throughout the company. This puts more pressure on organizing roles, but also flattens out the organization. Internal collaboration around topics and customer needs is going to be the key for SMBs to finally get serious about customer experiences. Their customers are online and in social networks. This includes not just B2C SMBs but B2B companies as well. Don't believe me?
To find the players, just search the hashtag #SocialSelling and you will see… 3) The visual networks will begin to move from content aggregators to content collaboration platforms. Pinterest, Instagram, Vine and others will begin to add more of the features brands want, starting with marketing platforms rather than the one-off brand partnerships they offer today, and this will open ways for SMBs to engage with clear brand messaging and metrics, eventually providing more "collaboration" between brand and consumer. Don't think for a minute Facebook bought Oculus Rift so you could see your timeline in 3-D. The social networks I advise customers to invest in are the ones that are intrinsically audio and visual. Players from SoundCloud to Pinterest are deploying ways for brands to harness their interactive visual or audio-based social networks to sell ad units, aka brand messaging. While the social media revolution was going on, the emphasis was on the social; today it is more and more about the media in social, and that is what enterprises, and soon small and medium businesses, will be connected to.

    Read the article

  • Knowing your user is key--Part 1: Motivation

    - by erikanollwebb
    I was thinking about where the best place to start this blog would be and finally came back to a theme that I think is pretty critical--successful gamification in the enterprise comes down to knowing your user. Lots of folks will say that gamification is about understanding that everyone is a gamer. But at least in my org, that argument won't play for a lot of people. Pun intentional. It's not that I don't see the attraction to the idea--really, very few people play no games at all. If they don't play video games, they might play solitaire on their computer. They may play card games, or some type of sport. Mario Herger has some great facts on how much game playing there is going on at his Enterprise-Gamification.com website. But at the end of the day, I can't sell that into my organization well. We are Oracle. We make big, serious software designed to run your whole business. We don't make Angry Birds out of your financial reporting tools. So I stick with the argument that works better: gamification techniques are really just good principles of user experience packaged a little differently. Feedback? We already know feedback is important when using software. Progress indicators? Got that too. Game mechanics may package things in a more explicit way, but it's not really "new". Knowing how to use game mechanics--and this is what a user experience team is important for--means totally understanding who our users are and what they are motivated by. For several years, I taught college psychology courses, including Motivation. Motivation is generally broken down into intrinsic and extrinsic motivation. There's intrinsic, which comes from within the individual. And there's extrinsic, which comes from outside the individual. Intrinsic motivation is the motivation that comes from a general sense of pleasure in the doing of something. For example, I like to cook. I like to cook a lot. The kind of cooking I think is just fun makes other people--people who don't like to cook--cringe. Like the cake I made this week--the star-spangled rhapsody from The Cake Bible: two layers of meringue, two layers of genoise flavored with a raspberry eau de vie syrup, whipped cream with berries and a mousseline buttercream, also flavored with raspberry liqueur and topped with fresh raspberries and blueberries. I love cooking--I ask for cooking tools for my birthday and Christmas, and I take classes like sushi making and knife skills for fun. I like reading about how you can make an emulsion of egg yolks, melted butter and lemon, cook it slowly, and transform it into a sauce hollandaise (my use of all the egg yolks that didn't go into the aforementioned cake). And while it's nice when people like what I cook, I don't do it for that. I do it because I think it's fun. My former boss, Ultan Ó Broin, loves to fish in the sea off the coast of Ireland. Not because he gets prizes for it, or awards, but because it's fun. To quote a note he sent me today when I asked if having been recently ill kept him from the beginning of mackerel season, he told me he had already been out and said "I can fish when on a deathbed" (to read more of Ultan's work, see his blogs on User Assistance and Translation). That's not the kind of intensity you get about something you don't like to do. I'm sure you can think of something you do just because you like it. So how does that relate to gamification? Gamification in the enterprise space is about uncovering the game within work.
Gamification is about tapping into things people already find motivating.  But to do that, you need to know what that user is motivated by. Customer Relationship Management (CRM) is one of those areas where over-the-top gamification seems to work (not to plug a competitor in this space, but you can search on what Bunchball* has done with a company just a little north of us on 101 for the CRM crowd).  Sales people are naturally competitive and thrive on that plus recognition of their sales work.  You can use lots of game mechanics like leaderboards and challenges and scorecards with this type of user and they love it.  Show my whole org I'm leading in sales for the quarter?  Bring it on!  However, take the average accountant and show how much general ledger activity they have done in the last week and expose it to their whole org on a leaderboard and I think you'd see a lot of people looking for a new job.  Why?  Because in general, accountants aren't extraverts who thrive on competition in their work.  That doesn't mean there aren't game mechanics that would work for them, but they won't be the same game mechanics that work for sales people.  It's a different type of user and they are motivated by different things. To break this up, I'll stop here and post now.  I'll pick this thread up in the next post. Thoughts? Questions? *Disclosure: To my knowledge, Oracle has no relationship with Bunchball at this point in time.

    Read the article

< Previous Page | 301 302 303 304 305 306 307 308 309 310 311 312  | Next Page >