Search Results

Search found 21089 results on 844 pages for 'virtual memory'.


  • Installation problem Ubuntu 12.04 Crashing hardware error

    - by user93640
    I have been running Ubuntu 8.04 for quite some time without many problems. About a year ago I started trying to upgrade to 10.04 LTS, but without any success. Each time I tried to upgrade or even do a fresh install, the installation process crashed after about an hour or so (I forget exactly how long). Now I wanted to try Ubuntu 12.04 (not even installing it; I only selected "Try Ubuntu without installing") and I got similar errors. I did not try to install it because of my earlier experience with 10.04, when I also lost 8.04 and had to install from scratch again (after which it worked). I get the following screen (as I am not allowed to upload photos here, the text):
        26.767262] [Hardware Error]: CPU 0: Machine Check Exception: 0 Bank 5: b200001804000e0f
        26.767279] [Hardware Error]: TSC 0
        26.767287] [Hardware Error]: PROCESSOR 0:6f6 TIME 1349017924 SOCKET 0 APIC 0 microcode 44
        26.767297] [Hardware Error]: Run the above through 'mcelog --ascii'
        26.767307] [Hardware Error]: CPU 1: Machine Check Exception: 0 Bank 1: b200000000000175
        26.767316] [Hardware Error]: TSC 0
        26.767323] [Hardware Error]: PROCESSOR 0:6f6 TIME 1349017924 SOCKET 0 APIC 1 microcode 44
        26.767331] [Hardware Error]: Run the above through 'mcelog --ascii'
        26.767339] [Hardware Error]: CPU 1: Machine Check Exception: 0 Bank 5: b200003000000e0f
        26.767348] [Hardware Error]: TSC 0
        26.767354] [Hardware Error]: PROCESSOR 0:6f6 TIME 1349017924 SOCKET 0 APIC 1 microcode 44
        26.767363] [Hardware Error]: Run the above through 'mcelog --ascii'
        26.767371] [Hardware Error]: CPU 1: Machine Check Exception: 4 Bank 1: b200000000000175
        26.767379] [Hardware Error]: TSC 1bf231e65f
        26.767386] [Hardware Error]: PROCESSOR 0:6f6 TIME 1349017951 SOCKET 0 APIC 1 microcode 44
        26.767395] [Hardware Error]: Run the above through 'mcelog --ascii'
        26.767403] [Hardware Error]: CPU 1: Machine Check Exception: 4 Bank 5: b200003008000e0f
        26.767413] [Hardware Error]: TSC 1bf231e65f
        26.767421] [Hardware Error]: PROCESSOR 0:6f6 TIME 1349017951 SOCKET 0 APIC 1 microcode 44
        26.767429] [Hardware Error]: Run the above through 'mcelog --ascii'
        26.767437] [Hardware Error]: CPU 0: Machine Check Exception: 5 Bank 5: b200001806000e0f
        26.767447] [Hardware Error]: RIP |INEXACT| 60:<00000000c1018b5c> {mwait_idle+0x7c/0x1d0}
        26.767464] [Hardware Error]: TSC 1bf231e674
        26.767471] [Hardware Error]: PROCESSOR 0:6f6 TIME 1349017951 SOCKET 0 APIC 0 microcode 44
        26.767480] [Hardware Error]: Run the above through 'mcelog --ascii'
        26.767487] [Hardware Error]: Machine check: Processor context corrupt
        26.767495] Kernel panic - not syncing: Fatal Machine check
        26.767505] Pid: 579, comm: debconf-communi Tainted: G M 3.2.0.29-generic-pae #46-Ubuntu
        26.767515] Call Trace:
        26.767525] [<c158f812>] ? printk+0x2d/0x2f
        26.767534] [<c158f6e0>] panic+0x5c/0x161
        26.767542] [<c10247ef>] mce_panic.part.14+0x13f/0x170
        26.767551] [<c1024872>] mce_panic+0x52/0x90
        26.767558] [<c1024a18>] mce_reign+0x168/0x170
        26.767565] [<c1024bb5>] mce_end+0x105/0x110
        26.767572] [<c10252db>] do_machine_check+0x32b/0x4f0
        26.767581] [<c1024fb0>] ? mce_log+0x120/0x120
        26.767590] [<c15a5e47>] error_code+0x67/0x6c
        26.767602] panic occurred, switching back to text console
        26.768498] Rebooting in 30 seconds..
    For information, I have also tried Arch Linux earlier. I can install it, but when I try to install a window manager (LXDE) I again get similar errors. Fedora also crashes when installing, and Mandriva did not work for me either. Therefore I think something deep in the machine might be wrong.
But as stated above, I can do a clean install of 8.04, and 9.10 can also be installed without problems. Updates for 8.04 can be installed as well. My machine dual boots with XP on a different partition. My hardware: Memory: 2.0 GiB; Processor 0: Intel(R) Core(TM)2 CPU 6320 @ 1.86GHz; Processor 1: Intel(R) Core(TM)2 CPU 6320 @ 1.86GHz. How can I install Ubuntu 12.04? The last option would be to completely format my machine and install everything from scratch, but even then I am not sure that would solve it in the end. Can anybody help me out?

    Read the article

  • wcf http 504: Working on a mystery

    - by James Fleming
    Ok, so you're here because you've been trying to solve the mystery of why you're getting a 504 error. If you've made it to this lonely corner of the Internet, then the advice you're getting from other bloggers isn't the answer you are after. It wasn't the answer I needed either, so once I did solve my problem, I thought I'd share the answer with you.
    For starters, if by some miracle you landed here first, you may not already know that the 504 error is NOT coming from IIS or Cassini; that response code is coming from Fiddler:
        HTTP/1.1 504 Fiddler - Receive Failure
        Content-Type: text/html
        Connection: close
        Timestamp: 09:43:05.193
        ReadResponse() failed: The server did not return a response for this request.
    The takeaway here is that Fiddler won't help you with the diagnosis, and any further digging in that direction is a red herring.
    Assuming you've dug around a bit, you may have arrived at posts which suggest you may be getting the error because you're trying to hump too much data over the wire and have an urgent need to employ an anti-pattern due to a special case: http://delphimike.blogspot.com/2010/01/error-504-in-wcfnet-35.html. Or perhaps you're experiencing wonky behavior using the WCF-CustomIsolated Adapter in a Windows Server 2008 64bit environment, in which case the rather fly MVP Dwight Goins' advice is what you need: http://dgoins.wordpress.com/2009/12/18/64bit-wcf-custom-isolated-%E2%80%93-rest-%E2%80%93-%E2%80%9C504%E2%80%9D-response/
    For me, none of that was helpful. I could perform a GET on a single record (http://localhost:8783/Criterion/Skip(0)/Take(1)) but I couldn't get more than one record in my collection, as in http://localhost:8783/Criterion/Skip(0)/Take(2). I didn't have a big payload or a large number of objects, as you can see from the values of one record below:
        A-1B f5abd850-ec52-401a-8bac-bcea22c74138 .biological/legal mother This item refers to the supervisor's evaluation of the caseworker's ability to involve the biological/legal mother in the permanency planning process. 75d8ecb7-91df-475f-aa17-26367aeb8b21 false true Admin account 2010-01-06T17:58:24.88 1.20 764a2333-f445-4793-b54d-1c3084116daa
    So while I was able to retrieve one record without a hitch (thus the record above), I wasn't able to return multiple records. I confirmed I could get each record individually (Skip(1)/Take(1)), so it stood to reason the problem wasn't with the data at all, and I suspected a serialization error.
    The first step to resolving this was to enable WCF Tracing. Instructions on how to set it up are here: http://msdn.microsoft.com/en-us/library/ms733025.aspx. The tracing log led me to the solution:
        The use of type 'Application.Survey.Model.Criterion' as a get-only collection is not supported with NetDataContractSerializer. Consider marking the type with the CollectionDataContractAttribute attribute or the SerializableAttribute attribute or adding a setter to the property.
    So I was wrong (but close!). The problem was a deserializing issue in trying to recreate my read-only collection (http://msdn.microsoft.com/en-us/library/aa347850.aspx#Y1455). Looking at my underlying model, I saw I did have a read-only collection. Adding a setter was all it took.
        public virtual ICollection<string> GoverningResponses
        {
            get
            {
                if (!string.IsNullOrEmpty(GoverningResponse))
                {
                    return GoverningResponse.Split(';');
                }
                else
                    return null;
            }
        }
    Hope this helps. If it does, post a comment.
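    For reference, the snippet above still shows only the getter; a rough sketch of the same property with a setter added might look like the following. The string-backed GoverningResponse property and the join-back logic are assumptions for illustration, not code from the original project.
        using System.Collections.Generic;
        using System.Linq;

        public partial class Criterion
        {
            // Assumed backing value: a semicolon-delimited string kept by the model.
            public virtual string GoverningResponse { get; set; }

            public virtual ICollection<string> GoverningResponses
            {
                get
                {
                    // Same behavior as the original getter: split the delimited string, or return null.
                    return string.IsNullOrEmpty(GoverningResponse) ? null : GoverningResponse.Split(';');
                }
                set
                {
                    // The added setter lets the serializer rebuild the collection when deserializing.
                    GoverningResponse = (value == null) ? null : string.Join(";", value.ToArray());
                }
            }
        }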

    Read the article

  • Non-standard installation (installing Linux from Linux)

    - by Evan Plaice
    So, here's my setup. I have one partition with the newest version installed, a second partition with an older version installed (as a backup just in case), a swap partition that both share, and a boot partition so the bootloader doesn't need to be set up after each upgrade. Partitions:
        sda1 ext3 /boot
        sda2 ext4 / (current version)
        sda3 ext4 / (old version)
        sda4 swap /swap
        sda5 ntfs (contains folders symbolically linked to /home on /)
    So far it has been a very good setup. I can create new boot loaders without screwing it up, and adding my personal files into a new install is as simple as creating some symbolic links (the partition is NTFS in case I need to load Windows on the system again).
    Here's the issue. I'd like to be able to drop the install into /distro on the current version and install a new version on / on the old version, effectively replacing/upgrading it. The goal is to be able to just swap out new versions as they are released while maintaining redundancy in case I don't like the update. So far I have:
        - downloaded the install.iso
        - created a folder at /distro
        - copied the install.iso into /distro
        - extracted vmlinuz and initrd.lz into /distro
    Then I modified /boot/grub/menu.lst with the following entry:
        title Install Linux
        root (hd0,1)
        kernel /distro/vmlinuz
        initrd /distro/initrd.lz
    vmlinuz loads perfectly, but it says it can't find initrd.lz on boot. I have also tried to uncompress the image with:
        unlzma < initrd.lz > initrd.img
    and updating the menu.lst file to match, but that doesn't work either. I'm assuming that vmlinuz (the Linux kernel) loads, fires up the virtual filesystem by creating a ramdisk (initrd), mounts the iso, and launches the installer. Am I missing something here?
    Update: First, I wanted to say that the accepted answer would have been the best option if I was doing a normal Ubuntu install. Unfortunately, I was installing Linux Mint (which lacks the script needed to make debootstrap work). So the problem with the above approach was that I was missing the command that vmlinuz (the Linux kernel) needed to execute to boot into LiveCD mode. By looking in the /boot/grub/grub.cfg file I found what I was missing. Although this method will work, it requires that the installation files reside on their own partition. I took the easy route and used UNetbootin to drop the LiveCD onto a USB drive and booted from that.
    Like I said before, debootstrap would have been the ideal solution here. Even though I couldn't use it, I wrote down the steps it would have taken to use it.
    Step One: Format sda3 (the partition with the old copy of Linux that's being overwritten). I used GParted to format it as ext4 from within the current Linux install. How this is done varies based on what tools you prefer to use.
    Step Two: Mount the newly formatted partition (we'll call the mount ubuntu for simplicity):
        sudo mkdir /mnt/ubuntu
        sudo mount -o loop /dev/sda3 /mnt/ubuntu
    Step Three: Get debootstrap:
        sudo apt-get install debootstrap
    Step Four: Mount the install disk (replace ubuntu.iso with the name of your install disk):
        sudo mkdir /media/cdrom
        sudo mount -o loop ~/ubuntu.iso /media/cdrom
    Step Five: Install the OS using debootstrap (replace feisty with the version you're installing and amd64 with your processor's architecture):
        sudo debootstrap --arch amd64 feisty /mnt/ubuntu file:/media/cdrom
    The settings here vary. While I loaded debootstrap using an install ISO, you can also have debootstrap automatically download and install with a repository link (while most of these repositories contain Debian versions, I'm still not clear whether Ubuntu has similar repositories). Here is a list of the Debian package repositories and their mirrors. This is how you'd run debootstrap if you were doing it directly from a repository:
        sudo debootstrap --arch amd64 squeeze /mnt/debian http://ftp.us.debian.org/debian
    Here's the link that I primarily used to figure this out.

    Read the article

  • concurrency::accelerator_view

    - by Daniel Moth
    Overview
    We saw previously that accelerator represents a target for our C++ AMP computation or memory allocation and that there is a notion of a default accelerator. We ended that post by introducing how one can obtain accelerator_view objects from an accelerator object through the accelerator class's default_view property and the create_view method. The accelerator_view objects can be thought of as handles to an accelerator. You can also construct an accelerator_view given another accelerator_view (through the copy constructor or the assignment operator overload). Speaking of operator overloading, you can also compare two accelerator_view objects (for equality and inequality) to determine whether they refer to the same underlying accelerator. We'll see later that when we use concurrency::array objects, the allocation of data takes place on an accelerator at array construction time, so there is a constructor overload that accepts an accelerator_view object. We'll also see later that a new concurrency::parallel_for_each function overload can take an accelerator_view object, so it knows on what target to execute the computation (represented by a lambda that the parallel_for_each also accepts). Beyond normal usage, accelerator_view is a quality-of-service concept that offers isolation to multiple "consumers" of an accelerator. If in your code you are accessing the accelerator from multiple threads (or, in general, from different parts of your app), then you'll want to create separate accelerator_view objects for each thread.
    flush, wait, and queuing_mode
    When you create an accelerator_view via the create_view method of the accelerator, you pass in an option of immediate or deferred, which are the two members of the queuing_mode enum. At any point you can access this value from the queuing_mode property of the accelerator_view. When the queuing_mode value is immediate (which is the default), any commands sent to the device, such as kernel invocations and data transfers (e.g. parallel_for_each and copy, as we'll see in future posts), will get submitted as soon as the runtime sees fit (that is the definition of immediate). When the value of queuing_mode is deferred, the commands will be batched up. To send all buffered commands to the device for execution, there is a non-blocking flush method that you can call. If you wish to block until all the commands have been sent, there is a wait method you can call. Deferring is a more advanced scenario aimed at performance gains when you are submitting many device commands and you want to avoid the tiny overhead of flushing/submitting each command separately.
    Querying information
    Just like accelerator, accelerator_view exposes the is_debug and version properties. In fact, you can always access the accelerator object from the accelerator property on the accelerator_view class to access the accelerator interface we looked at previously.
    Interop with D3D (aka DX)
    In a later post I'll show an example of an app that uses C++ AMP to compute data that is used in pixel shaders. In those scenarios, you can benefit by integrating C++ AMP into your graphics pipeline, and one of the building blocks for that is being able to use the same device context from both the compute kernel and the other shaders. You can do that by going from accelerator_view to device context (and vice versa) through part of our interop API in amp.h: get_device and create_accelerator_view. More on those in a later post.
Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Willy Rotstein on Supply Chain Planning

    - by sarah.taylor(at)oracle.com
    Each time a merchandiser, buyer or planner in Retail makes a business decision around assortment, inventory, pricing and promotions there is an opportunity to improve both Profitability and Customer Service. Improving decision making, however, has always been a tricky business for retailers.  I have worked in this space for more than 15 years. I began my career as an academic, at Imperial College London, and then broadened this interest with Retailers, aiming to optimize their merchandising and supply chain decisions. Planning the business and optimizing profit is a complex process. The complexity arises from the variety of people involved, the large number of decisions to take across all business processes, the uncertainty intrinsic to the retail environment as well as the volume of data available for analysis.  Things are not getting any easier either. The advent of multi-channel, social media and mobile is taking these complexities to a new level and presenting additional opportunities for those willing to exploit them. I guess it is due to the complexities of the decision making process that, over the last couple of years working with Oracle Retail, I have witnessed a clear trend around the deployment of planning systems. Retailers are aiming to simplify their decision making processes. They want to use one joined up planning platform across the business and enhance it with "actionable" data mining and optimization techniques. At Oracle Retail, we have a vibrant community of international retailers who regularly come together to discuss the big issues in retail planning. It is a combination of fashion, grocery and speciality retailers, all sharing their best practice vision for planning and optimizing merchandise decisions. As part of the Retail Exchange program, at the recent National Retail Federation event in New York, I jointly hosted a Planning dinner with Peter Fitzgerald from Google UK, Retail Division. Those retailers from our international planning community who were in New York for the annual NRF event were able to attend. The group comprised some of Europe's great International Retail brands.  All sectors were represented by organisations like Mango, LVMH, Ahold, Morrisons, Shop Direct and River Island. They confirmed the current importance of engaging with Planning and Optimization issues. In particular the impact of the internet was a key topic. We had a great debate about new retail initiatives.  Peter highlighted how mobility is changing retail - in particular with the new "local availability search" initiative. We also had an exciting discussion around the opportunities to improve merchandising using the new data that is becoming available from search, social media and ecommerce sites. It will be our focus to continue to help retailers translate this data into better results while keeping their business operations simple. New developments in "actionable" analytics and computing capacity make this a very exciting area today. Watch this space for my contributions on these topics which will be made available through this blog. Oracle Retail has a strong Planning community. if you are a category manager, a planner, a buyer, a merchandiser, a retail supplier or any retail executive with a keen interest in planning then you would be very welcome to join Oracle Retail's Planning Community. 
As part of our community you will be able to join our in-person and virtual events, download topical white papers and best practice information specifically tailored to your area of interest.  If anyone would like to register their interest in joining our community of retailers discussing planning then please contact me at [email protected]   Willy Rotstein, Oracle Retail

    Read the article

  • Configuring Multiple Instances of MySQL in Solaris 11

    - by rajeshr
    Recently someone asked me for the steps to configure multiple instances of the MySQL database on an operating platform. Because of my familiarity with the Solaris OE, I prepared some notes on configuring multiple instances of MySQL on Solaris 11. Maybe it's useful for some.
    If you want to run the Solaris Operating System (or any other OS of your choice) as a virtualized instance on your desktop, consider using VirtualBox. To download the Solaris Operating System, click here. Once you have Solaris 11 up and running and have Internet connectivity to gain access to the Image Packaging System (IPS), please follow the steps below to install MySQL and configure multiple instances:
    1. Install the MySQL database in Solaris 11:
        $ sudo pkg install mysql-51
    2. Verify that MySQL is installed:
        $ svcs -a | grep mysql
       Note: The service FMRI will look similar to the one here: svc:/application/database/mysql:version_51
    3. Prepare the data file system for MySQL instance 1:
        zfs create rpool/mysql
        zfs create rpool/mysql/data
        zfs set mountpoint=/mysql/data rpool/mysql/data
    4. Prepare the data file system for MySQL instance 2:
        zfs create rpool/mysql/data2
        zfs set mountpoint=/mysql/data2 rpool/mysql/data2
    5. Change the mysql/data property of the MySQL service (SMF) to point to /mysql/data:
        $ svcprop mysql:version_51 | grep mysql/data
        $ svccfg -s mysql:version_51 setprop mysql/data=/mysql/data
    6. Create a new instance of MySQL 5.1:
       (a) Copy the manifest of the default instance to a temporary directory:
        $ sudo cp /lib/svc/manifest/application/database/mysql_51.xml /var/tmp/mysql_51_2.xml
       (b) Make the appropriate modifications to the XML file:
        $ sudo vi /var/tmp/mysql_51_2.xml
        - Change the "instance name" section to a new value, "version_51_2"
        - Change the value of the property named "data" to point to the ZFS file system "/mysql/data2"
    7. Import the manifest into the SMF repository:
        $ sudo svccfg import /var/tmp/mysql_51_2.xml
    8. Before starting the services, copy the file /etc/mysql/my.cnf to the data directories /mysql/data and /mysql/data2:
        $ sudo cp /etc/mysql/my.cnf /mysql/data/
        $ sudo cp /etc/mysql/my.cnf /mysql/data2/
    9. Make modifications to the my.cnf in each of the data directories as required:
        $ sudo vi /mysql/data/my.cnf
       Under the [client] section:
        port=3306
        socket=/tmp/mysql.sock
        ----
        ----
       Under the [mysqld] section:
        port=3306
        socket=/tmp/mysql.sock
        datadir=/mysql/data
        -----
        -----
        server-id=1
        $ sudo vi /mysql/data2/my.cnf
       Under the [client] section:
        port=3307
        socket=/tmp/mysql2.sock
        -----
        -----
       Under the [mysqld] section:
        port=3307
        socket=/tmp/mysql2.sock
        datadir=/mysql/data2
        -----
        -----
        server-id=2
    10. Make the appropriate modification to the startup script of MySQL (managed by SMF) so it points to the appropriate my.cnf for each instance:
        $ sudo vi /lib/svc/method/mysql_51
       Note: Search for all occurrences of the mysqld_safe command and modify it to include the --defaults-file option. An example entry would look as follows:
        ${MySQLBIN}/mysqld_safe --defaults-file=${MYSQLDATA}/my.cnf --user=mysql --datadir=${MYSQLDATA} --pid-file=${PIDFILE}
    11. Start the services:
        $ sudo svcadm enable mysql:version_51_2
        $ sudo svcadm enable mysql:version_51
    12. Verify that the two services are running by using:
        $ svcs mysql
    13. Verify the processes:
        $ ps -ef | grep mysqld
    14. Connect to each mysqld instance and verify:
        $ mysql --defaults-file=/mysql/data/my.cnf -u root -p
        $ mysql --defaults-file=/mysql/data2/my.cnf -u root -p
    Some references for Solaris 11 newbies: Taking your first steps with Solaris 11, Introducing the basics of Image Packaging System, and the Service Management Facility How To Guide. For a detailed list of official educational modules available on Solaris 11, please visit here. For MySQL courses from Oracle University, access this page.

    Read the article

  • Introducing Oracle System Assistant

    - by B.Koch
    by Josh Rosen One of the challenges with today's servers is getting the server up and running and understanding what all of the steps are once you plug the server in for the first time. So many different pieces come into play: installing drivers, updating firmware, configuring RAID, and provisioning the operating system. All of these steps must be done before you can even start using the server. Finding the latest firmware and drivers, making sure you have the right versions, and knowing that all the different software and firmware components work together properly can be a real challenge. If not done correctly, such as if you separately downloading disk firmware or controller firmware that doesn't match the existing OS drivers, you could experience bugs, performance problems, and incompatibilities. Gone are the days of having to locate the tools and drivers media that shipped with the server only to find out that newer versions of software and firmware are available on the web. Oracle has solved these challenges in the new X3-2 family of servers by introducing Oracle System Assistant. Oracle System Assistant is an innovative tool that is built-in to every new x86 server. It provides step-by-step assistance with configuring the server, updating firmware and drivers, and provisioning the operating system. Once you have completed all of the steps in the Oracle System Assistant tool, the server is ready to use. Oracle System Assistant was designed to be easy and straightforward. Starting it is as simple as pressing F9 when the server is booting. You'll need a keyboard, monitor, and mouse or you can use the remote console feature of Oracle ILOM (Integrated Lights Out Manager) to access a virtual KVM to the server from any machine. From there Oracle System Assistant will walk you through each of the steps necessary to set up your server. After configuring the network settings for Oracle System Assistant, the next step is to check for any new software or firmware for the server. Oracle System Assistant connects back to Oracle using your My Oracle Support account and downloads any updates that were made available to you for this specific server. This is where you really start to see the innovation that went into Oracle System Assistant. Firmware for Oracle ILOM and BIOS, operating system drivers, and other system firmware (including for option cards and disk drivers) come as a single bundle, downloading as a single unit, that has been engineered and tested to work together by Oracle. Oracle System Assistant figures out the right combination for your server, so you don't have to. Now that the server has the latest firmware, Oracle System Assistant will next walk you through configuring the hardware. From Oracle System Assistant, you can configure many Oracle ILOM settings, including the network settings and initial user accounts. This ensures that ILOM is accessible and ready to use. Oracle System Assistant is where all parts of the server come together. In addition to communicating with Oracle ILOM and interacting with BIOS, Oracle System Assistant understands and can configure the storage subsystem. Before installing the operating system, Oracle System Assistant can detect the storage configuration and configure RAID for all disks in the system. At this point, the server is ready to be provisioned with the host operating system. You can use Oracle System Assistant to provision a supported OS, including Oracle Linux, Oracle VM, RHEL, SuSe Linux, and Windows. 
And by using Oracle System Assistant, you can be sure that the proper OS drivers are installed for each of the installed hardware components. With Oracle System Assistant, initial setup of the server has never been easier. If we can innovate around problems and find solutions to make our servers easier to manage, this reduces IT costs and makes managing servers simpler. I think with Oracle System Assistant we have done just that. Josh Rosen is a Principal Product Manager at Oracle and previously spent more than a decade as a developer and architect of system management software. Josh has worked on system management for many of Oracle's hardware products ranging from the earliest blade systems to the latest Oracle x86 servers.

    Read the article

  • Analysis Services (SSAS) - Unexpected Internal Error when processing (ProcessUpdate). Workaround/Resolution

    - by James Rogers
    Many implementations require the use of ProcessUpdate to support Type 1 slowly changing dimensions. ProcessUpdate drops all of the affected indexes and aggregations in partitions affected by data that changes in the dimension on which the ProcessUpdate is being performed. Twice now I have had situations where the processing fails with "Internal error: An unexpected exception occurred." Any subsequent ProcessUpdate processing will also fail with the same error. In talking with Microsoft, the issue is corrupt indexes for the dimension(s) being processed in the partitions of the affected measure group. I cannot guarantee that the following will correct your problem, but it did in my case and saved us quite a bit of down time.
    Workaround: Run ProcessIndexes on the entire cube that is being processed and throwing the error. This corrected the problem on both 2008 and 2008 R2.
    Pros: Does not require a complete rebuild of the data (ProcessFull) for either the dimension or the cube. User access can continue while the ProcessIndexes is underway.
    Cons: Can take a long time, especially on large cubes with many partitions, dimensions and/or aggregations. Query performance is usually severely impacted due to the memory and CPU requirements for aggregation and index building.
        <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
          <Parallel>
            <Process xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ddl2="http://schemas.microsoft.com/analysisservices/2003/engine/2" xmlns:ddl2_2="http://schemas.microsoft.com/analysisservices/2003/engine/2/2" xmlns:ddl100_100="http://schemas.microsoft.com/analysisservices/2008/engine/100/100" xmlns:ddl200="http://schemas.microsoft.com/analysisservices/2010/engine/200" xmlns:ddl200_200="http://schemas.microsoft.com/analysisservices/2010/engine/200/200">
              <Object>
                <DatabaseID>MyDatabase</DatabaseID>
                <CubeID>MyCube</CubeID>
              </Object>
              <Type>ProcessIndexes</Type>
              <WriteBackTableCreation>UseExisting</WriteBackTableCreation>
            </Process>
          </Parallel>
        </Batch>
    The cube where the corruption exists can be found by having Profiler running while the ProcessUpdate is executing. The first partition that displays the "The Job has ended in failure." message in the TextData column will be part of the cube/measure group that has the corruption. You can try to run ProcessIndexes on just that measure group. This may correct the problem and save additional time if you have other large measure groups in the cube that are not affected by the corruption.
    Remember to execute your normal ProcessUpdate batch after the successful completion of the ProcessIndexes. ProcessIndexes does not pick up data changes.
    Things that did not work:
    ProcessClearIndexes - why this doesn't work while ProcessIndexes does is unclear at this point.
    ProcessFull on the partition in question - in my latest case, this would clear up the problem for that partition. However, the next partition the ProcessUpdate touched that had data in it would generate an error. This leads me to believe the corruption problem will exist in all partitions in the affected measure group that have data in them.
    NOTE: I experienced this problem in both a SQL 2008 and a SQL 2008 R2 Analysis Services environment, on separate servers built from the same relational database. This leads me to believe that some data condition in the tables used for the dimension processing caused the corruption, since the two environments were on physically separate hardware. I am waiting on Microsoft to analyze the dumps to give us more insight into what actually caused the corruption and will update this post accordingly.

    Read the article

  • OBIEE 11.1.1 - How to Enable Caching in Internet Information Services (IIS) 7.0+

    - by Ahmed A
    Follow these steps to configure static file caching and content expiration if you are using the IIS 7.0 Web Server with Oracle Business Intelligence. Tip: Install IIS URL Rewrite, which enables Web administrators to create powerful outbound rules. Following are the steps to set up static file caching for an IIS 7.0+ Web Server:
    1. In the web.config file for the OBIEE static files virtual directory (ORACLE_HOME/bifoundation/web/app), add the following outbound rule for caching:
        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
            <system.webServer>
                <urlCompression doDynamicCompression="true" />
                <rewrite>
                    <outboundRules>
                        <rule name="header1" preCondition="FilesMatch" patternSyntax="Wildcard">
                            <match serverVariable="RESPONSE_CACHE_CONTROL" pattern="*" />
                            <action type="Rewrite" value="max-age=604800" />
                        </rule>
                        <preConditions>
                            <preCondition name="FilesMatch">
                                <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/css|^text/x-javascript|^text/javascript|^image/gif|^image/jpeg|^image/png" />
                            </preCondition>
                        </preConditions>
                    </outboundRules>
                </rewrite>
            </system.webServer>
        </configuration>
    2. Restart IIS.

    Read the article

  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don’t miss the Contest:Participate in 5th Anniversary Contest   Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago this day I started this blog. Intention – my personal web blog. I wrote this blog for me and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things – history and databases.  And what better way to cover these two topics than to talk about the history of databases. If you want to be technical, databases as we know them today only date back to the late 1960’s and early 1970’s, when computers began to keep records and store memories.  But the idea of memory storage didn’t just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn’t decide they suddenly wanted to read novels, they needed a way to keep track of the harvest, of their flocks, and of the tributes paid to the local lord.  And that is how writing and the database began.  You could consider the cave paintings from 17,0000 years ago at Lascaux, France, or the clay token from the ancient Sumerians in 8,000 BC to be the first instances of record keeping – and thus databases. If you prefer, you can consider the advent of written language to be the first database.  Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years. Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive.  A collection of 20,000 stone tablets was unearthed in 1964 near the modern day city Tell Mardikh, in Syria.  This ancient database is from 2,500 BC, and appears to be a sort of law library where apprentice-scribes copied important documents.  Further archaeological digs hope to uncover the palace library, and thus an even larger database. Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt.  It was created by Ptolemy I, and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives when he accidentally set fire to it.  As any programmer knows who has forgotten to hit “save” or has experienced a sudden power outage, thousands of hours of work was lost in a single instant. Databases existed in very similar conditions up until recently.  Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press.  Someday the databases we rely on so much today will become another chapter in the history of record keeping.  Who knows what the databases of tomorrow will look like! Reference:  Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – Learn SQL Server 2014 Online in a Day – My Latest Pluralsight Course

    - by Pinal Dave
    Click here to watch SQL Server 2014 Administration New Features. SQL Server 2014 was released earlier this year and it has been extremely popular in the Microsoft world. Here is the announcement for everyone who has been asking me to build a tutorial around SQL Server 2014. I have authored the latest Pluralsight course on the subject of SQL Server 2014. This course is 4 hours and 17 minutes long, but the best part is that it covers all the latest features of SQL Server 2014. I have built this course with the assumption that the DBA is familiar with earlier versions of SQL Server and wants to explore and learn the new features of SQL Server 2014.
    The Challenge I Faced
    The biggest challenge I faced was how to come up with the outline for the course. The reason is that there are so many different features introduced in SQL Server 2014 that it would be difficult to cover each of them in a single course. I wanted to cover the topics which are the most relevant and useful to developers, but in addition I also wanted to cover the topics which may be useful to developers if they know that they exist in the product. I finally decided to depend on blog readers and a few SQL experts. I reached out to 20 selected people via email and gave them a list of the topics which I should be covering in this course. They all work in different organizations and have a good understanding of the needs of DBAs and developers. Based on their feedback, I was able to come up with a very good outline which is currently very popular in the Pluralsight library. Lots of people have asked me how I was able to come up with a course content outline so accurately. The credit goes to the developers and DBAs who voted on the topics and helped me build a very solid outline for the course.
    Outline of the Course
    Here is a quick outline for the course:
        - Introduction
        - Backup Enhancements
        - Security Enhancements
        - Columnstore Enhancements
        - Online Data Operations Enhancements
        - Enhancements with Microsoft Azure
        - SSD Buffer Pool Extensions
        - Resource Governor IO
        - Miscellaneous Features
        - Online Index Rebuilding
        - Live Plans for Long Running Queries
        - Transaction Durability
        - Cardinality Estimation
        - In Memory OLTP Optimization
    Well, I had great fun working on the topics mentioned in the outline. I am very confident that once you start the course, you will understand how each of the topics is built up and presented. I have made sure that each of the topics has a vivid and clear story to begin with. I first explain the story, and right after that I explain the concept.
    Who Should Attend This Course
    Everyone who has basic knowledge of SQL Server and wants to get up to date with SQL Server 2014 should attend this course. I have made sure that this course is easy to understand, and I have divided complex subjects into multiple parts. This way the learning is progressive, and anyone with little knowledge of the subject has enough time to understand the presented concept.
    Screenshots of the Course
    Here are a few screenshots from the course.
    How to Watch the Video Course
    This course is available at Pluralsight, and you will need a valid Pluralsight login. If you do not have a Pluralsight login, you can quickly sign up for the FREE trial. Click here to watch SQL Server 2014 Administration New Features.
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority News, T SQL, Video

    Read the article

  • Dual boot Ubuntu and Windows 7, I Can only boot ubuntu through recovery mode

    - by Alec
    I want to become a new user of Ubuntu; however, this problem is preventing me. I have/had Windows 7 Professional on my computer. I recently looked into getting Linux, discovered dual-booting, and decided to give it a try. First I created a bootable flash drive with Ubuntu 12.10 64 bit. I then followed the instructions on https://help.ubuntu.com/community/WindowsDualBoot. After I finished going through the setup, my computer rebooted. After the reboot I was able to select Ubuntu, advanced options for Ubuntu, 2 memory tests, and Windows 7 (loader). So I chose Windows (honestly I was more concerned that I still had everything on Windows at this point). I then rebooted again and selected Ubuntu. When I selected Ubuntu, the background screen of GRUB (the crimson/burgundy color) stayed for a few seconds, then the screen went black; video here: http://www.youtube.com/watch?v=6kKcG4sT7Lg&feature=plcp
    I tried again with the same results, so I redid the Ubuntu install differently using http://www.liberiangeek.net/2012/10/dual-booting-windows-7-and-ubuntu-12-10-quantal-quetzal/. After rebooting, the same thing happened. After that I was stumped, so I figured it couldn't hurt to experiment; after all, I backed up my Windows 7 stuff and I have the software disk. I tried booting in recovery mode under "advanced options for Ubuntu" and, sure enough, after selecting continue to normal boot it worked. So I updated and everything, but when I rebooted it still wouldn't boot under Ubuntu. It would always boot after recovery mode. So I tried installing 12.10 32 bit Ubuntu; the same problem keeps happening. I can still get to Ubuntu through recovery mode.
    So I went online and tried using the terminal (in the Ubuntu that I booted through recovery mode). While using it I discovered that "Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected" kept showing up. I also noticed a notification in the top right corner that looked like a do-not-enter sign. It said: "an error occurred, please run package manager from the right-click menu or apt-get in a terminal to see what is wrong. the error message was: 'Error in sitecustomize; set PYTHONVERBOSE for traceback: EOFError: EOF read where not expected traceback (most recent call last): File "/usr/bin/lsb_release" EOFError: EOF read where not expected' this usually means that your installed packages have unmet dependencies." Naturally I assumed this was what was causing my boot problems. I downloaded Synaptic and updated everything and the error went away, but my boot problem was still a problem.
    So I went online and found some things that have worked for others, like this:
        Try to do this (in your terminal):
        sudo nano /etc/default/grub
        Look for: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        Change it to: GRUB_CMDLINE_LINUX_DEFAULT="quiet"
        And update GRUB: sudo update-grub
        This should fix stuff.
    I did this and I still have the problem. Sorry for the excessive explanation; please help.

    Read the article

  • System.Threading.ThreadAbortException executing WCF service

    - by SURESH GIRIRAJAN
    In one of our prod servers we recently ran into an issue when we updated the web.config and tried to browse the service. We started seeing that the service was not responding and got the following warning in the application log. Our service is a WCF service, a BizTalk orchestration exposed as a service. We have other prod servers where we never ran into this issue, so what's different about this server? After going through a lot of forums I came upon some Microsoft service packs and hotfixes related to FCN (file change notification), but I didn't want to apply a patch on this server and then have to do the same on all the other servers too. So the solution is simple: I dropped the existing website, created a new site with a different name and the updated web.config, and browsed the service. Then I dropped that site, recreated the original website, and it worked fine without any issue.
    Event Viewer:
    Event Type: Warning
    Event Source: ASP.NET 2.0.50727.0
    Event Category: Web Event
    Event ID: 1309
    Date: 6/6/2011
    Time: 5:41:42 PM
    User: N/A
    Computer: PRODP02
    Description:
    Event code: 3005
    Event message: An unhandled exception has occurred.
    Event time: 6/6/2011 5:41:42 PM
    Event time (UTC): 6/6/2011 9:41:42 PM
    Event ID: a71769f42b304355a58c482bfec267f2
    Event sequence: 3
    Event occurrence: 1
    Event detail code: 0
    Application information:
        Application domain: /LM/W3SVC/518296899/ROOT/PortArrivals-2-129518698821558995
        Trust level: Full
        Application Virtual Path: /TESTSVC
        Application Path: D:\inetpub\wwwroot\RFID\TESTSVC\
        Machine name: PRODP02
    Process information:
        Process ID: 8752
        Process name: w3wp.exe
        Account name: domain\BizTalk_Svc_Hostlso
    Exception information:
        Exception type: ThreadAbortException
        Exception message: Thread was being aborted.
Request information:     Request URL: http://localhost:81/TESTSVC/TESTSVCS.svc     Request path: /TESTSVC/TESTSVCS.svc     User host address: 127.0.0.1     User:      Is authenticated: False     Authentication Type:      Thread account name: domain\BizTalk_Svc_Hostlso  Thread information:     Thread ID: 22     Thread account name: domain\BizTalk_Svc_Hostlso     Is impersonating: False     Stack trace:    at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)    at System.Web.HttpApplication.ApplicationStepManager.ResumeSteps(Exception error)  at System.Web.HttpApplication.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData)    at System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr)  <Description>Handling an exception.</Description> <AppDomain>/LM/W3SVC/518296899/ROOT/TESTSVC-6-129518741899334691</AppDomain> <Exception> <ExceptionType>System.Threading.ThreadAbortException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType> <Message>Thread was being aborted.</Message> <StackTrace> at System.Threading.Monitor.Enter(Object obj) at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath) at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest() at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state) at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke2() at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke() at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ProcessCallbacks() at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.CompletionCallback(Object state) at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped) at System.ServiceModel.Diagnostics.Utility.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped) </StackTrace> <ExceptionString>System.Threading.ThreadAbortException: Thread was being aborted.    
at System.Threading.Monitor.Enter(Object obj)    at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)    at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath)    at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest()    at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest()    at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state)    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke2()    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke()    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ProcessCallbacks()    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.CompletionCallback(Object state)    at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)    at System.ServiceModel.Diagnostics.Utility.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped)</ExceptionString>

    Read the article

  • Database – Beginning with Cloud Database As A Service

    - by Pinal Dave
    I love my weekend projects. Everybody does different activities on their weekends – traveling, reading, or just nothing. Every weekend I try to do something creative and different in the database world. The goal is that I learn something new, and if I enjoy my learning experience I share it with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them.
    I should highlight that today's applications use multiple databases, from SQL for transactions and analytics, NoSQL for documents, and in-memory for caching to indexing for search. Provisioning and deploying these databases often require extensive expertise and time. Often these databases are also not deployed on the same infrastructure and can create unnecessary latency between the application layer and the databases. Not to mention the different quality of service based on the infrastructure and the service provider where they are deployed. Moreover, there are additional problems that I have experienced with a traditional database setup when hosted in the cloud:
        - Database provisioning & orchestration
        - Slow speed due to hardware issues
        - Poor monitoring tools
        - High network latency
    Now, if you have great software and expert network engineers, you can continuously work on the above problems and overcome them. However, not every organization has the luxury of top-notch experts in the field. The above issues are related to infrastructure, but there are a few more problems which are related to the software/application side as well. Here are the top three things which can be problems if you do not have an application expert:
        - Replication and clustering
        - Simple provisioning of hard drive space
        - Automatic sharding
    Well, Morpheus looks like a product built by experts who have faced similar situations in the past. The product pretty much addresses all the pain points of developers and database administrators. What is different about Morpheus is that it offers a variety of databases, from MySQL, MongoDB and ElasticSearch to Redis, as a service. Thus users can pick and choose any combination of these databases. All of them can be provisioned in a matter of minutes with a simple and intuitive point-and-click user interface. The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions. In addition it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases.
    Here are a few steps on how one can get started with Morpheus. Follow along with me. First go to http://www.gomorpheus.com and register for a new and free account.
    Step 1: Sign up. It is very simple to sign up for Morpheus.
    Step 2: Select your database. I use MySQL for my daily routine, so I selected MySQL. Upon clicking the big red button to add an instance, it prompted a dialogue for creating a new instance.
    Step 3: Create a user. Now we just have to create a user in our portal which we will use to connect to a database hosted at Morpheus. Click on your database instance and it will bring you to the User screen. Over here you will notice once again a big red button to create a new user. I created a user with my first name.
    Step 4: Configure your MySQL client. I used MySQL Workbench and connected to the MySQL instance, which I had created, with the IP address and user.
    That's it! You are connected to the MySQL instance. Now you can create your objects just like you would on your local box.
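    If you would rather connect from application code than from MySQL Workbench, a minimal C# sketch using the MySQL Connector/NET library might look like the following; the server address, user, and password are placeholders, not the actual instance details from this post.
        using System;
        using MySql.Data.MySqlClient;   // MySQL Connector/NET package

        class MorpheusConnectTest
        {
            static void Main()
            {
                // Placeholder values: substitute the instance IP and the user created in Step 3.
                var connectionString = "Server=203.0.113.10;Port=3306;Uid=pinal;Pwd=your-password;";

                using (var connection = new MySqlConnection(connectionString))
                {
                    connection.Open();
                    using (var command = new MySqlCommand("SELECT VERSION();", connection))
                    {
                        // Prints the MySQL server version to confirm the connection works.
                        Console.WriteLine(command.ExecuteScalar());
                    }
                }
            }
        }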
    You will have all the features of Morpheus when you are working with your database.
    Dashboard: While working with Morpheus, I was most impressed with its dashboard. In future blog posts, I will write more about this feature. Also, with Morpheus you use the same process for provisioning and connecting with the other databases: MongoDB, ElasticSearch and Redis.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Entity Framework 4, WCF &amp; Lazy Loading Tip

    - by Dane Morgridge
    If you are doing any work with Entity Framework and custom WCF services in EFv1, everything works great. As soon as you jump to EFv4, you may find yourself getting odd errors that you can't seem to catch. The problem almost always has something to do with the new lazy loading feature in Entity Framework 4. With Entity Framework 1, you didn't have lazy loading, so this problem didn't surface.
    Assume I have a Person entity and an Address entity where there is a one-to-many relationship between Person and Address (Person has many Addresses). In Entity Framework 1 (or in EFv4 with lazy loading turned off), I would have to load the Address data by hand by using either the Include or the Load method:
        var people = context.People.Include("Addresses");
    or
        people.Addresses.Load();
    Lazy loading kicks in the first time the Person.Addresses collection is accessed:
        var people = context.People.ToList();
        // only person data is currently in memory
        foreach (var person in people)
        {
            // EF determines that no Address data has been loaded and lazy loads
            int count = person.Addresses.Count();
        }
    Lazy loading has the useful (and sometimes not useful) feature of fetching data when requested. It can make your life easier or it can make it a big pain. So what does this have to do with WCF? One word: serialization.
    When you need to pass data over the wire with WCF, the data contract is serialized into either XML or binary depending on the binding you are using. Well, if I am using lazy loading, the Person entity gets serialized, and during that process the Addresses collection is accessed. When that happens, the Address data is lazy loaded. Then the Address is serialized, the Person property is accessed and then also serialized, and then the Addresses collection is accessed. Now the second time through, lazy loading doesn't kick in, but you can see the infinite loop caused by this process. This is a problem with any serialization, but I personally found it trying to use WCF.
    The fix for this is to simply turn off lazy loading. This can be done on each call by using the context options:
        context.ContextOptions.LazyLoadingEnabled = false;
    Turning lazy loading off will now allow your classes to be serialized properly. Note, this is if you are using the standard Entity Framework classes. If you are using POCO, you will have to do something slightly different. With POCO, the Entity Framework will create proxy classes by default that allow things like lazy loading to work with POCO. This proxy basically creates a proxy object, a full Entity Framework object that sits between the context and the POCO object. When using POCO with WCF (or any serialization), just turning off lazy loading doesn't cut it. You have to turn off proxy creation to ensure that your classes will serialize properly:
        context.ContextOptions.ProxyCreationEnabled = false;
    The nice thing is that you can do this on a call-by-call basis. If you use a new context for each set of operations (which you should), then you can turn either lazy loading or proxy creation on and off as needed.
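    Putting the two options together, a minimal sketch of a service method that returns entities safely over WCF might look like the following; the MyEntities context name and the PersonService class are assumed names carried over from the Person/Address example above, not code from the original post.
        using System.Collections.Generic;
        using System.Linq;

        public class PersonService
        {
            // Returns Person entities with their Addresses, with lazy loading and proxy creation
            // turned off so serialization does not trigger the loop described above.
            public List<Person> GetPeopleWithAddresses()
            {
                using (var context = new MyEntities())   // "MyEntities" is an assumed context name
                {
                    context.ContextOptions.LazyLoadingEnabled = false;
                    context.ContextOptions.ProxyCreationEnabled = false;

                    // Eager-load the related data you want on the wire, since lazy loading is off.
                    return context.People.Include("Addresses").ToList();
                }
            }
        }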

    Read the article

  • Visual Studio Exceptions dialogs

    - by Daniel Moth
    Previously I covered step 1 of live debugging with start and attach. Once the debugger is attached, you want to go to step 2 of live debugging, which is to break. One way to break under the debugger is to do nothing and just wait for an exception to occur in your code. This is true for all types of code that you debug in Visual Studio, so let's consider the following piece of C# code: 3: static void Main() 4: { 5: try 6: { 7: int i = 0; 8: int r = 5 / i; 9: } 10: catch (System.DivideByZeroException) {/*gulp. sue me.*/} 11: System.Console.ReadLine(); 12: } If you run this under the debugger, do you expect an exception on line 8? It is a trick question: you have to know whether I have configured the debugger to break when exceptions are thrown (first-chance exceptions) or only when they are unhandled. The place you do that is the Exceptions dialog, which is accessible from the Debug->Exceptions menu and on my installation looks like this: Note that I have checked all CLR exceptions. I could have expanded (as shown for the C++ case in my screenshot) and selected specific exceptions. To read more about this dialog, please read the corresponding Exception Handling debugging MSDN topic and all its subtopics. So, for the code above, the debugger will break execution due to the thrown exception (exactly as if the try..catch were not there), and I see the following Exception Thrown dialog: Note the following: I can hit continue (or hit break and then later continue) and the program will continue fine, since I have a catch handler. If this were an unhandled exception, then that is what the dialog would say (instead of first-chance exception) and continuing would crash the app. That hyperlinked text ("Open Exception Settings") opens the Exceptions dialog I described further up. The coolest thing to note is the checkbox - this is new in the latest release of Visual Studio: it is a shortcut to the checkbox in the Exceptions dialog, so you don't have to open it to change this setting for this specific exception - you can toggle that option right from this dialog. Finally, if you try the code above on your system, you may observe a couple of differences from my screenshots. The first is that you may have an additional column of checkboxes in the Exceptions dialog. The second is that the last dialog I shared may look different to you. It all depends on the Debug->Options settings, and the two relevant settings are in this screenshot: The Exception Assistant is what configures the look of the UI when the debugger wants to indicate an exception to you, and the Just My Code setting controls the extra column in the Exceptions dialog. You can read more about those options on MSDN: How to break on User-Unhandled exceptions (plus Gregg’s post) and Exception Assistant. Before I leave you to go play with this stuff a bit more, please note that this level of debugging is now available for JavaScript too, and if you are looking at the Exceptions dialog and wondering what the "GPU Memory Access Exceptions" node is about, stay tuned on the C++ AMP blog ;-) Comments about this post by Daniel Moth welcome at the original blog.
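    The post above is about the debugger's Exceptions dialog, but as a related aside (not part of the original post), .NET 4 also lets code observe first-chance exceptions directly through the AppDomain.FirstChanceException event; the sketch below reuses the divide-by-zero example and simply logs the exception before the catch block handles it.

    ```csharp
    using System;
    using System.Runtime.ExceptionServices;

    class Program
    {
        static void Main()
        {
            // Fires for every managed exception in this AppDomain, before any catch runs -
            // roughly the moment the debugger breaks at when "Thrown" is ticked in the dialog.
            AppDomain.CurrentDomain.FirstChanceException +=
                (sender, e) => Console.WriteLine("First chance: " + e.Exception.GetType());

            try
            {
                int i = 0;
                int r = 5 / i;   // raises DivideByZeroException; the handler above sees it first
            }
            catch (DivideByZeroException) { /* handled, so the program keeps running */ }

            Console.ReadLine();
        }
    }
    ```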

    Read the article

  • Ongoing confusion about ivars and properties in Objective-C

    - by Earl Grey
    After almost 8 months of iOS programming, I am again confused about the right approach. Maybe it is not the language but some OOP principle I am confused about. I don't know... I was trying C# a few years back. There were fields (private variables, private data in an object), there were getters and setters (methods which exposed something to the world), and properties, which were THE exposed thing. I liked the elegance of the solution: for example, there could be a class that would have a property called DailyRevenue...a float...but there was no private variable called dailyRevenue, there was only a field, an array of single transaction revenues, and the getter for the DailyRevenue property calculated the revenue transparently. If the internals of the daily revenue calculation somehow changed, it would not affect somebody who consumed my DailyRevenue property in any way, since he would be shielded from the getter implementation. I understood that sometimes there was, and sometimes there wasn't, a 1-1 relationship between fields and properties, depending on the requirements. It seemed OK in my opinion. And that properties are THE way to access the data in an object. I know the difference between the private, protected, and public keywords. Now let's get to Objective-C. On what factor should I base my decision about making something only an ivar or making it a property? Is the mental model the same as I describe above? I know that ivars are "protected" by default, not "private" as in C#...but that's OK I think, no big deal for my present level of understanding of the whole iOS development. The point is ivars are not accessible from outside (given I don't make them public...but I won't). The thing that clouds my clear understanding is that I can have IBOutlets from ivars. Why am I seeing internal object data in the UI? Why is it OK? On the other hand, if I make an IBOutlet from a property, and I do not make it readonly, anybody can change it. Is this OK too? Let's say I have a ParseManager object. This object would use a built-in Foundation framework class called NSXMLParser. Obviously my ParseManager will utilize this NSXMLParser's capabilities but will also do some additional work. Now my question is, who should initialize this NSXMLParser object, and in which way should I make a reference to it from the ParseManager object, when there is a need to parse something? A) the ParseManager: 1) in its default init method (possible here: ivar or ivar+ppty), or 2) with lazy loading in a getter (requires a ppty here). B) Some other object, which will pass a reference to the NSXMLParser object to the ParseManager object: 1) in some custom initializer (initWithParser:(NSXMLParser *) parser) when creating the ParseManager object. A1 - the problem is, we create a parser and waste memory while it is not yet needed. However, we can be sure that all methods that are part of the ParseManager object can use the ivar safely, since it exists. A2 - the problem is, the NSXMLParser is exposed to the outside world, although it could be read only. Would we want a parser to be exposed in some scenario? B1 - this could maybe be useful when we would want to use more types of parsers...I don't know... I understand that architectural requirements and language are not the same thing. But clearly the two are related. How do I get out of this mess of mine? Please bear with me, I wasn't able to come up with a single ultimate question.
And secondly, it's better to not scare me with some superadvanced newspeak that talks about some crazy internals (what the compiler does) and edge cases.
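    For reference, the C# idiom the poster describes (a computed, read-only DailyRevenue property backed by private transaction data rather than a matching field) looks roughly like the sketch below; the Register class and RecordTransaction method are made-up names for illustration only.

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    // Illustrative sketch: the private list of transaction amounts is hidden, and
    // DailyRevenue is exposed only as a derived, read-only property, so callers are
    // shielded from how the total is actually calculated.
    public class Register
    {
        private readonly List<decimal> transactionRevenues = new List<decimal>();

        public void RecordTransaction(decimal amount)
        {
            transactionRevenues.Add(amount);
        }

        // No dailyRevenue field exists; the getter computes the value on demand.
        public decimal DailyRevenue
        {
            get { return transactionRevenues.Sum(); }
        }
    }
    ```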

    Read the article

  • E-Business Suite Plug-in 12.1.0.1 for Enterprise Manager 12c Now Available

    - by Steven Chan (Oracle Development)
    Oracle E-Business Suite Plug-in 12.1.0.1.0 is now available for use with Oracle Enterprise Manager 12c.  Oracle E-Business Suite Plug-in 12.1.0.1 is an integral part of Oracle Enterprise Manager 12 Application Management Suite for Oracle E-Business Suite. This latest plug-in extends EM 12c Cloud Control with E-Business Suite specific system management capabilities and features enhanced change management support. The Oracle Enterprise Manager 12c Application Management Suite for Oracle E-Business Suite includes: Oracle E-Business Suite Plug-in 12.1.0.1 combines functionality that was available in the previously-standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle Real User Experience Insight Oracle Configuration & Compliance capabilities  Features that were previously available in the standalone management packs are now packaged in the Oracle E-Business Suite Plug-in, which is certified with Oracle Enterprise Manager 12c Cloud Control:  Functionality previously available for Application Management Pack (AMP) is now classified as “System Management for Oracle E-Business Suite” within the plug-in. Functionality previously available for Application Change Management Pack (ACMP) is now classified as “Change Management for Oracle E-Business Suite” within the plug-in. The Application Configuration Console and the Configuration Change Console are now native components of Oracle Enterprise Manager 12c. System Management Enhancements General Oracle Enterprise Manager 12c Base Platform uptake: All components of the management suite are certified with Oracle Enterprise Manager 12c Cloud Control. Security Privilege Delegation: The Oracle E-Business Suite Plug-in now extends Enterprise Manager’s privilege delegation through Sudo and PowerBroker to Oracle E-Business Suite Plug-in host targets. Privileges and Roles for Managing Oracle E-Business Suite: This release includes new ready-to-use target and resource privileges to monitor, manage, and perform Change Management functionality. Cloning Named Credentials Uptake in Cloning: The Clone module transactions now let users leverage the Named Credential feature introduced in Enterprise Manager 12c, thereby passing all the benefits of Named Credentials features in Enterprise Manager to the Oracle E-Business Suite Plug-in users. Smart Clone improvements: In addition to the existing 11i support that was available on previous releases, the new Oracle E-Business Suite Plug-in widens the coverage supporting Oracle E-Business Suite releases 12.0.x and 12.1.x. The new and improved Smart Clone UI supports the adding of "pre and post" custom steps to a copy of the ready-to-use cloning deployment procedure. Now a user can pass parameters to the custom steps through the interview screen of the UI as well as pass ready-to-use parameters to the custom steps. Additional configuration enhancements are included for configuring RAC targets databases, such as the ability to customize listener names and the option to configure with Virtual IP or Scan IP. Change Management Enhancements Customization Manager Support for longer file names: Customization Manager now handles file names up to thirty characters in length. Patch Manager Queuing of Patch Manager Runs: This feature allows patch runs to queue up if Patch Manager detects a specific target is in a blackout state. 
Multi-node system patching: The patch run interview has been enhanced to allow the Enterprise Manager Administrator to choose which nodes adpatch will run on. New AD Administration Options: The patch run interview has been extended to include the AD Administration Options "Relink Application Programs", "Generate Product Jars Files", "Generate Report Files", and "Generate Form Files". Downloads: Fresh install (for new customers or existing customers wishing to perform a fresh install): Enterprise Manager Store (within Enterprise Manager 12c) or Oracle Software Delivery Cloud. Upgrades (for existing customers wishing to upgrade their AMP 4.0 or AMP 3.1 installations): Oracle Technology Network. Getting Started with Oracle E-Business Suite Plug-In, Release 12.1.0.1 (Note 1434392.1). Prerequisites: Enterprise Manager Cloud Control 12c and one or more of the following Oracle E-Business Suite releases: Release 11.5.10 CU2 with 11i.ATG_PF.H.RUP6 or higher; Release 12.0.4 with R12.ATG_PF.A.delta.6; Release 12.1 with R12.ATG_PF.B.delta.3. Platforms and OS: release certification information is available from My Oracle Support via the Certification page; search for "Oracle Application Management Pack for Oracle E-Business Suite and release 12.1.0.1.0." Related Articles: Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1)

    Read the article

  • General website publishing questions involving domain forwarding issue

    - by Gorgeousyousuf
    Even though I have a certain level of knowledge and experience of web development, I have never been interested in obtaining a domain and publishing a website from my own server. Today I have been struggling with getting my own domain and configuring it using web resources. I started by learning the outline of the web publishing process, including web server installation, deploying a website for testing purposes, router port forwarding, getting a domain, and forwarding the domain to my router, which will also forward HTTP requests to my web server. I am confused about some parts and so far could not get the website accessed from outside the network. All I am trying to do is for learning purposes, so I am not paying much attention to security issues for now. I have Server 2008 and IIS 7.5 installed. I use a laptop and have access to the modem over wireless, and my modem is a Zoom x6 5590. Well, I will continue explaining what I have done so far and what I expect to happen after each action. I have successfully accessed my website from any local computer by entering the internal IP address and port pair of the host machine in a browser. Next, I forwarded port 80 of my host machine by creating a virtual server like 10.0.0.x (internal static IP of the host) - tcp - start port: 80 - end port: 80 in the router options. Now I suppose every request that comes to the public IP on port 80 will be forwarded to my host machine (10.0.0.x) over port 80. So if everything went as desired, the website listening on port 80 will accept the request, process it and finally respond bla bla bla... I expected to access my website from outside the network by entering http://MyPublicIp:80 in a browser, but I couldn't accomplish this so far, despite using GoDaddy's domain forwarding tool. I do see a small view of my website when I click the "preview" button that checks whether the address (http://publicip/Index.aspx) I entered, where my domain will be forwarded, is available or not. I am sure that configuring the domain does not play a role in solving such a problem, since using the public IP and port matching does not help either. So here is the first question: why am I facing this problem? After that, I have a couple of questions regarding domain forwarding using the GoDaddy tool. Can I forward my domain to any port, for example port 8080, other than the default HTTP port 80? Additionally, can I use a sub-domain to forward to a different port of the host? What I want to design is this: if the client enters www.mydomain.com, website1 will respond over a specified port, and when a client enters info.mydomain.com, another website which listens on a different port will respond. I tried to add a sub-domain and forward it to an address like http://www.mydomain.com:8080/Index.aspx with no success. Can I really do that? Finally, what if I have an FTP site listening on the default port 21 and I create a domain like ftp.mydomain.com that will forward to that FTP site address? Is it possible to use sub-domains for FTP site access? I know I am more than confused, but however you reply, you will help me get a clearer view of this subject. Thank you very much in advance.

    Read the article

  • SQL Developer Data Modeler v3.3 Early Adopter: Search

    - by thatjeffsmith
    photo: Stuck in Customs via photopin cc The next version of Oracle SQL Developer Data Modeler is now available as an Early Adopter (read, beta) release. There are many new major feature enhancements to talk about, but today’s focus will be on the brand new Search mechanism. Data, data, data – SO MUCH data Google has made countless billions of dollars around a very efficient and intelligent search business. People have become accustomed to having their data accessible AND searchable. Data models can have thousands of entities or tables, each having dozens of attributes or columns. Imagine how hard it could be to find what you’re looking for here. This is the challenge we have tackled head-on in v3.3. Same location as the Search toolbar in Oracle SQL Developer (and most web browsers) Here’s how it works: search as you type (wicked fast, as the entire model is loaded into memory); supports regular expressions (regex); results loaded to a new panel below; search across designs and models; search EVERYTHING, or filter by type; save your frequent searches; save your search results as a report; open common properties of objects in search results and edit basic properties on-the-fly. Want to just watch the video? We have a new Oracle Learning Library resource available now which introduces the new and improved Search mechanism in SQL Developer Data Modeler. Go watch the video and then come back. Some Screenshots This will be a pretty easy feature to pick up. Search is intuitive – we’ve already learned how to do search. Now we just have a better interface for it in SQL Developer Data Modeler. But just in case you need a couple of pointers… The SYS data dictionary in model form with Search Results If I type ‘translation’ in the search dialog, then the results will come up as hits are ‘resolved.’ By default, everything is searched, although I can filter the results after the fact. You can see where the search finds a match in the ‘Content’ column. Save the Results as a Report If you limit the search results to a category and a model, then you can save the results as a report. All of the usual suspects You can optionally include the search string, which displays at the top of the report as ‘PATTERN.’ You can save your common reporting setups as a template and reuse those as well. Here’s a sample HTML report: Yes, I like to search my search results report! Two More Ways to Search You can search ‘in context’ by opening the ‘Find’ dialog from an active design. You can do this using the ‘Search’ toolbar button or from a model context menu. Searching a specific model Instead of bringing up the old modal Find dialog, you now get to use the new and improved Search panel. Notice there’s no ‘Model’ drop-down to select and that the active Search form is now in the Search panel versus the search toolbar up top. What else is new in SQL Developer Data Modeler version 3.3? All kinds of goodies. You can send your model to Excel for quick edits/reviews and suck the changes back into your model, you can share objects between models, and much much more. You’ll find new videos and blog posts on the subject in the next few days and weeks. Enjoy! If you have any feedback or want to report bugs, please visit our forums.

    Read the article

  • Ubuntu 12.04 Install - Black Screen - No Grub - Have run Boot Repair Disk

    - by Pat
    Day 4 of my purgatory. History: Had problems with the live CD at first, had to set the "nomodeset" option ... and then it worked fine. Installed Ubuntu "Alongside" Windows XP from the live CD (NOT wubi). Upon reboot after installation, I get the BIOS ... and then a black screen. If I hit shift after the BIOS screen I get text that says "loading GRUB ..", but then no GRUB ... just a black screen. What I have tried to do: Total re-installation ... 3 times now. Also tried with wubi, same black screen. Have gone back to the normal (non-wubi) install. After installation, I tried re-booting the live CD ... and trying to change the GRUB file using: sudo gedit /etc/default/grub ... to "nomodeset" and "timeout=10" ... but it won't let me save my changes because I'm using the live CD "in memory" system and don't have permissions to the disks (I think). I tried logging in ... but it won't let me. I then read many posts on this site. I'm stuck. This morning, I ran the "boot repair disk". Results here: http://paste.ubuntu.com/1132333/ What I think is wrong: Since I can get the live CD to run (perfectly) with the "nomodeset" option, I think all I need to do is get to GRUB to change that ... but I can't get to GRUB. Appreciate any advice. Pat. Day 5 ... I downloaded "Super Grub 2 Disk" from: http://www.supergrubdisk.org/super-grub2-disk/ This looks promising. I can boot the disk and it brings me to a GRUB program that allows me to: 1) Boot to Windows ... which works 2) Boot to Ubuntu ... which does NOT work. When I choose boot to Ubuntu, I get lines across the screen, which is an obvious video card problem. Likely because I need to set the "nomodeset" option. So, I attempted to use Super Grub2 to edit the grub file ... but it is TOTALLY different from the Ubuntu grub file ... and I don't know where to put the "nomodeset" option. Still stuck ... The bottom line is that: 1) I need to edit /etc/default/grub on sda(1) ... which is my boot drive ... to add the "nomodeset" option 2) To do that I need to get into GRUB ... but I can't. Holding down shift just echoes "loading grub .." and then takes me to a black screen 3) I can boot to the live CD by setting nomodeset ... but I cannot access the hard disk as root ... I can't save my changes! Can anyone tell me how to log in as root for the filesystem from the live CD ... so I can edit the grub file on the HARD DISK ... and then run update-grub??

    Read the article

  • Microsoft Cloud Day - the ups and downs

    - by Charles Young
    The term ‘cloud’ can sometimes obscure the obvious.  Today’s Microsoft Cloud Day conference in London provided a good example.  Scott Guthrie was halfway through what was an excellent keynote when he lost network connectivity.  This proved very disruptive to his presentation, which centred on a series of demonstrations of the Azure platform in action.  Great efforts were made to find a solution, but no quick fix presented itself.  The venue’s IT facilities were dreadful – no WiFi, poor 3G reception (forget 4G…this is the UK) and, unbelievably, no-one on hand from the venue staff to help with infrastructure issues.  Eventually, after an unscheduled break, a solution was found and Scott managed to complete his demonstrations.  Further connectivity issues occurred during the day. I can say that the cause was prosaic.  A member of the venue staff had interfered with a patch board and inadvertently disconnected Scott Guthrie’s machine from the network by pulling out a cable. I need to state the obvious here.  If your PC is disconnected from the network it can’t communicate with other systems.  This could include a machine under someone’s desk, a mail server located down the hall, a server in the local data centre, an Internet search engine or even, heaven forbid, a role running on Azure. Inadvertently disconnecting a PC from the network does not imply a fundamental problem with the cloud or any specific cloud platform.  Some of the tweeted comments I’ve seen today are analogous to suggesting that, if you accidentally unplug your microwave from the mains, this suggests some fundamental flaw with the electricity supply to your house.  This is poor reasoning, to say the least. As far as the conference was concerned, the connectivity issue in the keynote, coupled with some later problems in a couple of presentations, served to exaggerate the perception of poor organisation.  Software problems encountered before the conference prevented the correct set-up of a smartphone app intended to convey agenda information to attendees.  Although some information was available via this app, the organisers decided to print out an agenda at the last moment.  Unfortunately, the agenda sheet did not convey enough information, and attendees were forced to approach conference staff throughout the day to clarify the locations of the various presentations. Despite these problems, the overwhelming feedback from conference attendees was very positive.  There was a real sense of excitement in the morning keynote.  For many, this was their first sight of new Azure features delivered in the ‘spring’ release.  The most common reaction I heard was amazement and appreciation that Azure’s new IaaS features deliver built-in template support for several flavours of Linux from day one.  This, coupled with open source SDKs and several presentations on Azure’s support for Java, node.js, PHP, MongoDB and Hadoop, served to communicate that the Azure platform is maturing quickly.  The new virtual network capabilities also surprised many attendees, and the much improved portal experience went down very well. So, despite some very irritating and disruptive problems, the event served its purpose well, communicating the breadth and depth of the newly upgraded Azure platform.  I enjoyed the day very much.

    Read the article

  • HPCM 11.1.2.x - Outline Optimisation for Calculation Performance

    - by Jane Story
    When an HPCM application is first created, it is likely that you will want to carry out some optimisation on the HPCM application’s Essbase outline in order to improve calculation execution times. There are several things that you may wish to consider. Because at least one dense dimension is required for an application to deploy from HPCM to Essbase, “Measures” and “AllocationType”, as the only required dimensions in an HPCM application, are created dense by default. However, for optimisation reasons, you may wish to consider changing this default dense/sparse configuration. In general, calculation scripts in HPCM execute best when they are targeting destinations with one or more dense dimensions. Therefore, consider your largest target stage, i.e. the stage with the most assignment destinations, and choose that as a dense dimension. When optimising an outline in this way, it is not possible to have a dense dimension in every target stage, so testing the dense/sparse settings in every stage is the key to finding the best configuration for each individual application. It is not possible to change the dense/sparse setting of individual cloned dimensions from EPMA. When a dimension that is to be repeated in multiple stages, and therefore cloned, is defined in EPMA, every instance of that dimension has the same storage setting. However, such manual changes may not be preserved in all cases; please see below for the full explanation. Once the application has been deployed from EPMA to HPCM and from HPCM to Essbase, it is possible to make the dense/sparse changes to a cloned dimension directly in Essbase. This can be done by editing the properties of the outline in Essbase Administration Services (EAS) and manually changing the dense/sparse settings of individual dimensions. There are two methods of deployment from HPCM to Essbase from 11.1.2.1: a “replace” deploy method and an “update” deploy method. “Replace” will delete the Essbase application and replace it; if this method is chosen, then any changes made directly on the Essbase outline will be lost. If you use the update deploy method (with or without archiving and reloading data), then the Essbase outline, including any manual changes you have made (i.e. changes to the dense/sparse settings of the cloned dimensions), will be preserved. Notes: If you are using the calculation optimisation technique mentioned in a previous blog to calculate multiple POVs (https://blogs.oracle.com/pa/entry/hpcm_11_1_2_optimising) and you are calculating all members of that POV dimension (e.g. all months in the Period dimension), then you could consider making that dimension dense. Always review block sizes after all changes! The maximum block size recommended in the Essbase Database Administrator’s Guide is 100k for 32-bit Essbase and 200k for 64-bit Essbase. However, calculations may perform better with a larger than recommended block size, provided that sufficient memory is available on the Essbase server. Test different configurations to determine the most optimal solution for your HPCM application. Please note that this blog article covers HPCM outline optimisation only. Additional performance tuning can be achieved by methodically testing database settings, i.e. data cache, index cache and/or commit block settings. For more information on Essbase tuning best practices, please review these items in the Essbase Database Administrator’s Guide.
For additional information on the commit block setting, please see the previous PA blog article https://blogs.oracle.com/pa/entry/essbase_11_1_2_commit

    Read the article

  • Automount of external hard disk

    - by moose
    I have an Intenso 6002560 1TB Memory Station - an external hard disk. This hard disk gets connected via Y-USB cable. When I connect both USB-ends to my Notebook, it gets recognized by my Ubuntu 10.04.4 LTS system: moose@pc07:~$ lsusb [...] Bus 002 Device 005: ID 13fd:1840 Initio Corporation [...] and Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00065e10 Device Boot Start End Blocks Id System /dev/sda1 * 1 37810 303704064 83 Linux /dev/sda2 37810 38914 8864769 5 Extended /dev/sda5 37810 38914 8864768 82 Linux swap / Solaris Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x0d6ea32a Device Boot Start End Blocks Id System /dev/sdc1 1 121601 976759008+ c W95 FAT32 (LBA) But it did not get mounted: moose@pc07:/dev$ mount -l /dev/sda1 on / type ext4 (rw,errors=remount-ro,user_xattr) proc on /proc type proc (rw,noexec,nosuid,nodev) none on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) none on /dev type devtmpfs (rw,mode=0755) none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) none on /dev/shm type tmpfs (rw,nosuid,nodev) none on /var/run type tmpfs (rw,nosuid,mode=0755) none on /var/lock type tmpfs (rw,noexec,nosuid,nodev) none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev) gvfs-fuse-daemon on /home/moose/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=moose) However, I could mount it manually with mount -t vfat /dev/sdc1 /mnt/sdc1 as you can see here: moose@pc07:~$ mount -l /dev/sda1 on / type ext4 (rw,errors=remount-ro,user_xattr) proc on /proc type proc (rw,noexec,nosuid,nodev) none on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) none on /dev type devtmpfs (rw,mode=0755) none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) none on /dev/shm type tmpfs (rw,nosuid,nodev) none on /var/run type tmpfs (rw,nosuid,mode=0755) none on /var/lock type tmpfs (rw,noexec,nosuid,nodev) none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev) gvfs-fuse-daemon on /home/moose/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=moose) /dev/sdc1 on /mnt/sdc1 type vfat (rw) edit: Another command: moose@pc07:~$ sudo blkid -o list device fs_type label mount point UUID ---------------------------------------------------------------------------------------------------------------------- /dev/sda1 ext4 / 45eb611b-517e-425b-8057-0391726cccd5 /dev/sda5 swap <swap> e9dc42f3-594c-4b62-874a-305eda5eed41 moose@pc07:~$ blkid -o list device fs_type label mount point UUID ---------------------------------------------------------------------------------------------------------------------- /dev/sda1 ext4 / 45eb611b-517e-425b-8057-0391726cccd5 /dev/sda5 swap <swap> e9dc42f3-594c-4b62-874a-305eda5eed41 /dev/sdc1 /mnt/sdc1 edit: another 
command: moose@pc07:~$ ls -l /dev/disk/by-uuid/ total 0 lrwxrwxrwx 1 root root 10 2012-09-30 09:31 45eb611b-517e-425b-8057-0391726cccd5 -> ../../sda1 lrwxrwxrwx 1 root root 10 2012-09-30 09:31 e9dc42f3-594c-4b62-874a-305eda5eed41 -> ../../sda5 Here is a link to a Launchpad question about this problem. But I would like it to mount automatically. What do I have to do?

    Read the article

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
    Significance of Results: Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance for the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well-suited to SPARC T4 servers. The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10, delivering 80%, 32% and 2x improvements on Data Loading, Default Aggregation and Usage Based Aggregation, respectively. The SPARC T4-2 server with the Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15-dimension database. The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15-dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads. The Sun Storage F5100 Flash Array provides more than a 20% improvement out of the box compared to a mid-size fiber channel disk array for default aggregation and user-based aggregation. The Sun Storage F5100 Flash Array with Oracle Essbase provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation. Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.
    Performance Landscape (data size in millions of items; times in minutes):
    SPARC T4-2, 2 x SPARC T4 2.85 GHz: data size 1000; database load 149; default aggregation 398*; usage based aggregation 55
    Sun M5000, 4 x SPARC64 VII 2.53 GHz: data size 1000; database load 269; default aggregation 526; usage based aggregation 115
    Sun M5000, 4 x SPARC64 VII 2.4 GHz: data size 400; database load 120; default aggregation 448; usage based aggregation 18
    * 398 minutes with CALCPARALLEL set to 16; 484 minutes with CALCPARALLEL threads set to 8
    Configuration Summary. Hardware Configuration: 1 x SPARC T4-2, 2 x 2.85 GHz SPARC T4 processors, 128 GB memory, 2 x 300 GB 10000 RPM SAS internal disks. Storage Configuration: 1 x Sun Storage F5100 Flash Array, 40 x 24 GB flash modules, SAS HBA with 2 SAS channels; data storage scheme: striped (RAID 0), Oracle Solaris ZFS. Software Configuration: Oracle Solaris 10 8/11, Installer V 11.1.2.2.100, Oracle Essbase Client v 11.1.2.2.100, Oracle Essbase v 11.1.2.2.100, Oracle Essbase Administration Services 64-bit, Oracle Database 11g Release 2 (11.2.0.3), HP's Mercury Interactive QuickTest Professional 9.5.0.
    Benchmark Description: The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce the benchmark results. The benchmark test results include: Database Load: time elapsed to build a database, including outline and data load. Default Aggregation: time elapsed to build the aggregation. Usage Based Aggregation: time elapsed to build the aggregate views proposed as a result of tracked retrieval queries.
Summary of the data used for this benchmark: 40 flat files, each of size 1.2 GB, 49.4 GB in total; 10 million rows per file, 1 billion rows total; 28 columns of data per row; the database outline has 15 dimensions (five of them attribute dimensions); the Customer dimension has 13.3 million members; 3 rule files. Key Points and Best Practices: The Sun Storage F5100 Flash Array has been used to accelerate application performance. Setting the data load threads (DLTHREADSPREPARE) to 64 and the load buffer to 6 improved data loading by about 9%. Factors influencing aggregation materialization performance are the "Aggregate Storage Cache" and the "Number of Threads" (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were: Aggregate Storage Cache: 32 GB; CALCPARALLEL: 16. See Also: Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server (oracle.com), Oracle Essbase (oracle.com OTN), SPARC T4-2 Server (oracle.com OTN), Oracle Solaris (oracle.com OTN), Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN). Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.

    Read the article

< Previous Page | 769 770 771 772 773 774 775 776 777 778 779 780  | Next Page >