Search Results

Search found 38288 results on 1532 pages for 'oracle linux partners'.


  • Is it possible to route all mail sent to a mailbox to another server's mailbox?

    - by Chau Chee Yang
    I have a Linux server that runs a local mail service. There are a few user accounts on this server. Users can send mail to each other, but only within the LAN; for example, I can run # mail user1 to send mail to user1. Users are not able to send mail to public addresses. Some services, like HylaFAX, use this local mail service to send notifications of fax status. I don't want to manage and maintain the local mail service anymore. I have subscribed to a package from my ISP to host a public domain of my own. I would like my HylaFAX service to be able to send its notification mails to the public mail server; is that possible? Ideally, all mail sent to the local mail server would be forwarded to the public mail server, so that the local mail service handles forwarding only.
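    A minimal sketch of one way to do this, assuming a sendmail-compatible MTA on the Linux box and example.com as a stand-in for the hosted public domain: map each local account to a public address in /etc/aliases, so locally generated mail (including the HylaFAX notifications) is forwarded rather than delivered locally.
      # /etc/aliases -- forward local accounts to mailboxes on the public domain
      user1:   user1@example.com
      user2:   user2@example.com
      faxadm:  faxmaster@example.com   # HylaFAX notification account (name is an assumption)
      # rebuild the alias database after editing
      newaliases
    Alternatively, each user can drop a single line such as user1@example.com into ~/.forward to get the same effect without touching the system aliases.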

    Read the article

  • Is my Windows partition too far down on the disk?

    - by Trevor Alexander
    I have /boot on /dev/sda1 (1GB), followed by my Linux root LVM on /dev/sda2 (1.3TB). Finally, I installed Windows 7 on /dev/sda3 in the remaining 700GB of space. When I select Windows 7 in the GRUB menu, I get something like the following error and am thrown to grub4dos:
    find --set-root --ignore-floppies --ignore-cd /bootmgr
    Error 15: file not found
    Unable to locate necessary tables for adjustment.
    None of the options in grub4dos return anything but the above error. I heard that 1TB is the upper limit for locating Windows 7 partitions; is this true? How can I fix the above?
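    For reference, a hedged sketch of a direct GRUB 2 chainload entry for that partition, which skips the grub4dos search step entirely (the device numbering is an assumption and should be adjusted to the actual layout):
      # /etc/grub.d/40_custom -- chainload Windows 7 on the third MBR partition of the first disk
      menuentry "Windows 7 (chainload)" {
          insmod part_msdos
          insmod ntfs
          set root=(hd0,msdos3)
          chainloader +1
      }
      # then regenerate grub.cfg, e.g. with update-grub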

    Read the article

  • Making a Live Thumb drive boot with Persistent files, settings AND *drivers* that load on boot?

    - by Luke Stanley
    I have seen https://wiki.ubuntu.com/LiveUsbPendrivePersistent but it's a mess. What methods support persistent drivers as well as files and settings, and don't shorten the lifespan of the flash drive? I'd like to hear your personal recommendations on, say, Portable Linux, USB Creator, Remastersys + Unetbootin, etc. Backstory: I have an Inspiron 1525 whose hard drive has been slowly dying. I want to switch to a live USB/CD/DVD system until I can get it repaired, but my laptop's internal wifi device requires a network connection by some other means before Xubuntu will enable it, then I have to enter my wifi key again, and THEN I have to reinstall Skype, etc. I'd be damned every time I have to shut the laptop down. I'm OK with writing a shell script to install apps and copy settings as required, but a good persistent install should make that unnecessary; the script approach is slow and doesn't take care of drivers. The last time I tried making an ISO with Remastersys it didn't seem to copy all the required settings.
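    One hedged, minimal route for an Ubuntu/Xubuntu live stick of that era: casper looks for a filesystem image named casper-rw and overlays it on the live system when the kernel is booted with the persistent option, which preserves installed drivers and packages as well as settings (the mount point below is an assumption):
      # create a 2 GB persistence file in the root of the USB stick
      dd if=/dev/zero of=/media/usb/casper-rw bs=1M count=2048
      mkfs.ext3 -F /media/usb/casper-rw
      # then append " persistent" to the kernel line in the stick's syslinux.cfg / text.cfg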

    Read the article

  • How can I resolve all external addresses to an internal address?

    - by Darian
    I am currently setting up a Linux server as a WiFi access point. Whenever someone connected to the hotspot/access point tries to load a page, they should get forced onto the one page. Note: this won't have Internet access! I.e. a user tries accessing www.google.com and it returns 192.168.1.200 or example.domain. I've read that "dnsmasq" can be used to redirect any external addresses to an internal address, but I haven't had any luck. Does anyone have an example of a config for "dnsmasq"? I have also read that this can be done through a proxy?
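    A minimal dnsmasq sketch for this kind of captive setup, assuming the server's LAN address is 192.168.1.200 and clients get both DHCP and DNS from the same box (interface name is an assumption):
      # /etc/dnsmasq.conf
      interface=wlan0                      # listen on the access-point interface
      address=/#/192.168.1.200             # answer every DNS query with the local web server
      dhcp-range=192.168.1.50,192.168.1.150,12h
    The local web server then serves the single page for whatever Host header it receives; requests made directly to a raw IP address would additionally need an iptables REDIRECT rule.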

    Read the article

  • Partner Blog Series: PwC Perspectives - "Is It Time for an Upgrade?"

    - by Tanu Sood
    Is your organization debating its next step with regard to Identity Management? While all the stakeholders are well aware that one-size-fits-all doesn’t apply to identity management, just as true is the fact that no two identity management implementations are alike. Oracle’s recent release of Identity Governance Suite 11g Release 2 has innovative features such as a customizable user interface, a shopping-cart-style request catalog and more. However, only a close look at the use cases can help you determine if and when an upgrade to the latest R2 release makes sense for your organization. This post will describe a few of the situations that PwC has helped our clients work through. “Should I be considering an upgrade?” If your organization has an existing identity management implementation, the questions below are a good start to assessing your current solution to see if you need to begin planning for an upgrade: Does the current solution scale and meet your projected identity management needs? Does the current solution have a customer-friendly user interface? Are you completely meeting your compliance objectives? Are you still using spreadsheets? Does the current solution have the features you need? Is your total cost of ownership in line with well-performing, similar-sized companies in your industry? Can your organization support your existing Identity solution? Is your current product-based solution well positioned to support your organization's tactical and strategic direction?
    Existing Oracle IDM customers: Several existing Oracle clients are looking to move to R2 in 2013. If your organization is on Sun Identity Manager (SIM) or Oracle Identity Manager (OIM) and your current assessment suggests that you need to upgrade, you should strongly consider OIM 11gR2. Oracle provides upgrade paths to Oracle Identity Manager 11gR2 from SIM 7.x / 8.x as well as Oracle Identity Manager 10g / 11gR1. Some of the considerations for migration are: the end-of-product-support schedule (for Sun or legacy OIM); the new features available in R2 (including common Helpdesk scenarios, profiling of disconnected applications, increased scalability, custom connectors, browser-based UI configurations, portability of configurations during future upgrades, etc.); cost of ownership (for SIM customers); customizations that need to be maintained during the upgrade; and the time/cost to migrate now vs. waiting for the next version. If you are already on an older version of Oracle Identity Manager and actively maintaining your support contract with Oracle, you might be eligible for a free upgrade to OIM 11gR2. Check with your Oracle sales rep for more details.
    Existing IDM infrastructure in place: In the past year and a half, we have seen a surge in IDM upgrades from non-Oracle infrastructure to Oracle. If your organization is looking to improve the end-user experience related to identity management functions, the shopping-cart-style access request model and browser-based personalization features may come in handy. Additionally, organizations that have a large number of applications, including ecommerce, LDAP stores, databases, UNIX systems and mainframes, as well as a high frequency of user identity changes and access requests, will value the high scalability of the OIM reconciliation and provisioning engine. Furthermore, we have seen our clients like OIM's out-of-the-box (OOB) support for multiple authoritative sources. 
For organizations looking to integrate applications that do not have an exposed API, the Generic Technology Connector framework supported by OIM will be helpful in quickly generating a custom connector using the OOB wizard. Similarly, organizations in need of not only flexible on-boarding of disconnected applications but also strict access management to these applications using approval flows will find the flexible disconnected application profiling feature an extremely useful tool that provides a high degree of time savings. Organizations looking to develop custom connectors for home-grown or industry-specific applications will likewise find that the Identity Connector Framework support in OIM allows them to build and test a custom connector independently before integrating it with OIM. Lastly, most of our clients considering an upgrade to OIM 11gR2 have also expressed interest in the browser-based configuration feature that allows an administrator to quickly customize the user interface without adding any custom code. Better yet, code customizations, if any, made to the product are portable across future upgrades, which is viewed as a big time and money saver by most of our clients.
Below are some upgrade methodologies we adopt based on client priorities and the scale of implementation. For illustration purposes, we have assumed that the client is currently on Oracle Waveset (formerly Sun Identity Manager). Integrated Deployment: The integrated deployment is typically used where a client wants to split the implementation so that their current IDM continues to handle the front-end workflows while OIM takes over the back-office operations incrementally. Once all the back-office operations are moved completely to OIM, the front-end workflows are migrated to OIM. Parallel Deployment: This deployment is typically done where a distinct line can be drawn between which functionality each platform supports. For example, the current IDM implementation handles the password reset functionality while OIM takes over the access provisioning and RBAC functions. Cutover Deployment: A cutover deployment is typically recommended where a client has smaller, less complex implementations and it makes sense to leverage the migration tools to move them over immediately.
What does this mean for YOU? There are many variables to consider when making upgrade decisions. For most customers, there is no ‘easy’ button. Organizations looking to upgrade or considering a new vendor should start by mapping their requirements to product features. The recommended approach is to take stock of both the short-term and long-term objectives; understand the product features, future roadmap, maturity and level of commitment from R&D; and build the implementation plan accordingly. As we said in the beginning, there is no one-size-fits-all with Identity Management. So, arm yourself with the knowledge, engage in industry discussions, bring in business stakeholders and start building your implementation roadmap. In the next post we will discuss best practices for R2 implementations. We will cover the do's and don'ts and share our thoughts on making implementations successful.
Meet the Writers: Dharma Padala is a Director in the Advisory Security practice within PwC. He has been implementing medium- to large-scale Identity Management solutions across multiple industries including the utility, health care, entertainment, retail and financial sectors. 
Dharma has 14 years of experience delivering IT solutions and has been implementing Identity Management solutions for the past 8 years. Scott MacDonald is a Director in the Advisory Security practice within PwC. He has consulted for several clients across multiple industries including financial services, health care, automotive and retail. Scott has 10 years of experience in delivering Identity Management solutions. John Misczak is a member of the Advisory Security practice within PwC. He has experience implementing multiple Identity and Access Management solutions, specializing in Oracle Identity Manager and Business Process Execution Language (BPEL). Praveen Krishna is a Manager in the Advisory Security practice within PwC. Over the last decade Praveen has helped clients plan, architect and implement Oracle identity solutions across diverse industries. His experience includes delivering security across diverse topics like network, infrastructure, application and data, where he brings a holistic point of view to problem solving. Jenny (Xiao) Zhang is a member of the Advisory Security practice within PwC. She has consulted across multiple industries including financial services, entertainment and retail. Jenny has three years of experience delivering IT solutions and has been implementing Identity Management solutions for the past year and a half.

    Read the article

  • How can I tell if ZFS (zfs-fuse) dedup/compression is applied to a particular file?

    - by asari
    I have a ZFS-formatted partition using zfs-fuse for Linux (Ubuntu). I had used it for a while, then enabled dedup and compression on it (zfs set compression=on/dedup=on). Now I think I have some files that are dedup'ed and compressed, and files that are not yet. It was OK, but sometimes I was confused. Let's see: the following command would consume almost 4GB of my zfs storage: cp oldfile.4GB newfile.4GB .. and this would consume almost zero: cp newfile.4GB newfile.4GB.2 This is because the old file is not yet compressed, so dedup hasn't happened, I think. My idea is: if I can find old files that are not yet dedup'ed/compressed, I can batch copy/rename/remove them to eliminate duplication and redundancy. But how can I check that? I know that re-copying the whole contents of my storage should work (even better with checking the time stamp of each file), but I'd be happier if I had a zfsstat-like tool that shows some file properties.
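    There is no per-file dedup flag exposed, but a rough check for compression is to compare a file's logical size with its on-disk size; a hedged sketch (pool and dataset names are assumptions):
      ls -lh /tank/data/oldfile.4GB        # logical (apparent) size
      du -h  /tank/data/oldfile.4GB        # blocks actually allocated; much smaller => compressed
      zfs get compressratio,used,referenced tank/data   # dataset-wide ratios, not per file
      zdb -DD tank                         # dedup table statistics for the whole pool
    Files written before compression/dedup were enabled keep their old blocks, so re-copying them, as the question suggests, is indeed what rewrites them with the new properties.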

    Read the article

  • Can't find my.cnf on my VPS

    - by dan
    OK, I am a total noob when it comes to servers (but eager to learn). I am renting a VPS so I can host a Magento store. The VPS is using CentOS 5, DirectAdmin and Xen virtualization. I've read a bit about how to optimize Magento and one suggestion is to edit 'my.cnf'. However, I can't find this file anywhere from within DirectAdmin. I also can't connect to the VPS via a console: my host offers console access via their website, but it won't let me enter my root password, it just hangs... (How do people normally connect to their Linux VPS?) Please help? Thank you.
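    A hedged sketch of the usual ways to locate the file once connected over SSH (which is how most people connect to a Linux VPS), using placeholder values:
      ssh root@your.vps.ip.address
      # paths MySQL searches for my.cnf, in order
      mysqld --verbose --help 2>/dev/null | grep -A1 'Default options'
      # or just search the filesystem; on CentOS it is typically /etc/my.cnf
      find /etc /usr /var -name my.cnf 2>/dev/null
    If no my.cnf exists, MySQL is simply running with built-in defaults and the file can be created at /etc/my.cnf.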

    Read the article

  • links for 2010-04-22

    - by Bob Rhubart
    Barry N. Perkins: Unique Business Value vs. Unique IT "Some solutions may look good today, solving a budget challenge by reducing cost, or solving a specific tactical challenge, but result in highly complex environments, that may be difficult to manage and maintain and limit the future potential of your business. Put differently, some solutions might push today's challenge into the future, resulting in a more complex and expensive solution." -- Barry N. Perkins, VP Oracle Modernization & Oracle Integrated Solutions (tags: oracle otn enterprisearchitecture modernization) Paul Homchick: The Information Driven Value Chain - Part 2 Paul Homchick continues his series with a look "at the way investments have been made in enterprise software in an effort to create and manage value, and how systems are moving from a controlled-process design approach towards dynamically gathering and using information." (tags: oracle otn enterprisearchitecture) @vambenepe: The battle of the Cloud Frameworks: Application Servers redux? "The battle of the Cloud Frameworks has started," says William Vambenepe, "and it will look a lot like the battle of the Application Servers which played out over the last decade and a half." (tags: oracle otn cloud frameworks appserver) @ORACLENERD: COLLABORATE: Day 4 Wrap Up Oraclenerd fesses up: "The day started out with the realization that I pulled off the best (COLLABORATE - self-anointed) prank ever. Twitter was...all atwitter about the fact that Mark Rittman was Oracle's Person of the Year. Of course it wasn't true. If you look at the picture, you'll realize that he's wearing exactly the same clothes in the magazine cover as he is in real life." (tags: collaborate2010 oracleace) Oracle's Hal Stern at Cloud Expo: "We've Moved from 'What' to 'How'" | Cloud Computing Journal "Hal also spoke a bit about building 'a sustainable IT model.' By this, he said he didn't mean the various Green IT and similar efforts that 'are all about data center efficiency. I think the operational model is just as important. Many enterprises are managing a tremendous amount of complexity, and it's hard to make this sustainable.'" -- Cloud News Desk (tags: oracle cloud cloudexpo halstern) @ORACLENERD: COLLABORATE: The Beach Party "Then tiki statues somehow were incorporated into various dances" -- Oracle ACE Chet "oraclenerd" Justice (tags: oracle otn oracleace collaborate2010 oaug ioug lasvegas) David Andrews: Collaborate Day Two "Collaborate 2010 has focused on helping attendees understand what is already available and how to make more effective use of it. This does not sound exciting but it is extremely valuable. Most customers use only a small fraction of the capability of the products they already own. Helping them understand all the additional things they could be doing without buying anything more is very valuable." -- David Andrews (tags: oracle oaug collaborate2010 ioug)

    Read the article

  • Bridge virtual machines out WLAN interface

    - by Thomas
    It seems that my wlan card (intel 5100 AGN) firmware doesn't allow "spoofing" MAC addresses. This has the side effect of destroying the capability to bridge out my virtual machines on that interface. Apparently this is a common thing on wlan cards. I can see the incoming traffic just fine in my virtual machines, but their DHCP queries don't get bridged out of the WLAN card. It works perfectly well when using the wired ethernet port. Is there a workaround for this? MAC-NAT or something? I don't want to route my virtual machines out to the Internet because I don't want my host OS to even have an IP address. I'm using Linux and KVM for virtualization.
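    One possible workaround to sketch is MAC-NAT with ebtables on the bridge, rewriting guest source MACs to the wlan card's own address on the way out (the interface name, MAC addresses and guest IP are placeholders). Return frames then arrive addressed to the wlan MAC and need matching dnat rules per guest IP, so treat this only as a starting point:
      # rewrite the source MAC of frames leaving the bridge via the wireless card
      ebtables -t nat -A POSTROUTING -o wlan0 -j snat \
          --to-source 00:11:22:33:44:55 --snat-arp --snat-target ACCEPT
      # hand frames for a known guest IP back to that guest's real MAC
      ebtables -t nat -A PREROUTING -i wlan0 -p IPv4 --ip-dst 192.168.1.101 \
          -j dnat --to-destination 52:54:00:12:34:56 --dnat-target ACCEPT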

    Read the article

  • For virtual machines, when SMP is available on the host, should the guest also have SMP set up?

    - by supercheetah
    I'm trying to find out the best "bang for my buck", so to speak, in regard to virtual machines and SMP. I have an Intel Core 2 Duo, which of course has two cores and the VT extensions, and I'm running Ubuntu Linux (host) on it with VirtualBox, which has Windows Vista (guest). Currently I've got the guest machine set up with two processors to give Windows a chance to manage its own parallelism, but I'm not certain that it's any faster. I've tried it with just one processor, but it's hard to tell if it's any better. Any thoughts? Should the guest have two processors set up?
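    For experimenting, the vCPU count can be flipped from the command line while the VM is powered off; a sketch assuming the VM is named "Vista" (SMP guests in VirtualBox also need the I/O APIC and hardware virtualization enabled):
      VBoxManage modifyvm "Vista" --cpus 2 --ioapic on --hwvirtex on
      # run the same workload, then compare against a single vCPU
      VBoxManage modifyvm "Vista" --cpus 1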

    Read the article

  • Calculate data transferred in a local LAN

    - by ramdaz
    How do you calculate the data flowing between a computer and the gateway computer? I have a Linux router/gateway running iptables which routes Internet traffic in a LAN. I have individual users, with IP/MAC addresses mapped, who access the Internet through the gateway computer. I would like to find out the traffic used by individual users. Is it possible for us to find out what kind of traffic was HTTP, SMTP, FTP etc.? Is it also possible to collect the information on an hourly basis and get specific info so that I can store the information in a database? I have heard of IP accounting; is that the right way?
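    IP accounting with plain iptables is one workable approach: per-user chains in FORWARD whose per-rule byte counters are read and zeroed by an hourly cron job and loaded into a database. A minimal sketch for one user (addresses and chain names are placeholders):
      # outbound and inbound are counted separately so the port matches stay correct
      iptables -N ACCT_U1_OUT ; iptables -N ACCT_U1_IN
      iptables -A FORWARD -s 192.168.1.10 -j ACCT_U1_OUT
      iptables -A FORWARD -d 192.168.1.10 -j ACCT_U1_IN
      iptables -A ACCT_U1_OUT -p tcp --dport 80 -j RETURN   # HTTP requests
      iptables -A ACCT_U1_IN  -p tcp --sport 80 -j RETURN   # HTTP responses
      iptables -A ACCT_U1_OUT -p tcp --dport 25 -j RETURN   # SMTP
      iptables -A ACCT_U1_OUT -j RETURN ; iptables -A ACCT_U1_IN -j RETURN   # everything else
      # hourly, from cron: dump exact byte counters and reset them, then load into a database
      iptables -L ACCT_U1_OUT -v -x -n -Z ; iptables -L ACCT_U1_IN -v -x -n -Z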

    Read the article

  • Nearest PC equivalent to Mac Target Disk Mode?

    - by username
    Mac firmware has a special boot mode that allows you to offer its internal hdd to another computer as an external disk (you just connect the two machines via an IEEE 1394 cable). Only the second machine needs a functioning OS installed. Any good suggestions for something similar on the PC side of things? Block level access isn't important to me, I'd just like to be able to copy files off it. It doesn't matter to me if it uses Ethernet, IEEE 1394, or wifi - I just like having a quick way to access files on a client PC. Is there any single-purpose Linux distro specially designed to do this? It'd be nice to have something super simple, quickbooting, and small that I could install on a USB drive. I used to use Knoppix, but it's overkill as a Target Mode replacement.

    Read the article

  • DPMS, keep screen off when lid shut

    - by Evan Teran
    I have a laptop running Linux. In my xorg configuration, I have DPMS set up so that the screen automatically turns off on several events. In addition to that, I have the following script tied to ACPI lid open/close events:
      #!/bin/sh
      for i in $(pidof X); do
          CMD=$(ps --no-heading $i)
          XAUTH="$(echo $CMD | sed -n 's/.*-auth \(.*\)/\1/p')"
          DISPLAY="$(echo $CMD | sed -n 's/.* \(:[0-9]\) .*/\1/p')"
          # turn the display off or back on
          export XAUTHORITY=$XAUTH
          /usr/bin/xset -display $DISPLAY dpms force $1
      done
    Basically, this script takes one parameter ("on" or "off"), then iterates through all of my running X sessions and either turns the monitor on or off. Here's my issue: when I close the lid of the laptop, the screen goes off as expected, but if a mouse event occurs (like if something bumps into the table...) then the screen turns back on even though the lid is closed (I can see the light through the side of the laptop). Is there a way to prevent the screen from turning on during a mouse event if the lid is closed?
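    One possible workaround (an untested sketch): have the same ACPI handler disable the pointer devices while the lid is closed, so stray mouse events never reach the X server. The device name below is an assumption; check xinput list on the actual machine. Placed inside the per-display loop, DISPLAY and XAUTHORITY are already set:
      if [ "$1" = "off" ]; then
          xinput --disable "SynPS/2 Synaptics TouchPad"
      else
          xinput --enable "SynPS/2 Synaptics TouchPad"
      fi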

    Read the article

  • Which FLOSS text editor is most like kwrite without being KDE-based?

    - by darenw
    Among text editors on Linux, I usually prefer KWrite. I like that I can quickly turn on/off line numbers and line wrap in the View menu. Other settings are easy to change. Other text editors I've used in the past, such as Gnome's gedit, bury line numbering and wrapping checkboxes deeper into the menu system, making it more distracting to change while concentrating on real work. However, KWrite is a KDE app. On Ubuntu it drags in over a dozen other packages, which I suspect I don't really need. Why would a text editor need all that? It's slower to start up than some other editors I've tried. I'm also trying to run an all-gnome system w/o any KDE, just to see how far I get with it. So, what GUI text editor isn't KDE-based, has few dependencies and quick start-up, easy to change line wrap and numbering, and general similarity to KWrite? What comes closest?

    Read the article

  • Unable to write into character device file in Ubuntu

    - by Surjya Narayana Padhi
    I have just written a Linux character driver. I created one character device file named X. I can see the file in the /dev folder. Now I want to do some read/write operations on this file. I opened the file in the vi editor, wrote some text into it, used :wq and exited. It didn't show any error. But now, when I cat the same file, I am not able to see any content. I tried it several times, with the same result. Please let me know if I am doing something wrong.
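    A hedged note: editors generally do not write to a device node in place (vi may write a temporary regular file and rename it over the node), so a simpler test is to exercise the driver's read/write entry points directly, assuming the node is /dev/X and the driver implements them:
      echo "hello from userspace" > /dev/X     # calls the driver's write()
      cat /dev/X                               # calls the driver's read()
      # or with explicit sizes
      dd if=/dev/X bs=64 count=1 | hexdump -C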

    Read the article

  • Ask a DNS server what sites it hosts - and how to possibly prevent misuse

    - by Exit
    I've got a server on which I host my company website as well as some of my clients' sites. I noticed that a domain which I created, but never used, was being attacked by a poke-and-hope hacker. I imagine that the hacker collected the domain by hitting my DNS server and requesting what domains are hosted on it. So, in the interest of prevention and better server management, how would I ask my own DNS server (Linux, CentOS 4) what sites are being hosted on it? Also, is there a way to prevent these types of attacks by hiding this information? I would assume that DNS servers need to keep some information public, but I'm not sure if there is something that most hosts do to help prevent these bandwidth-wasting poke-and-hope attacks. Thanks in advance.
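    Assuming the server runs BIND (paths and names below are placeholders): the zones it serves are simply those declared in its configuration, and the only way a remote user can enumerate a whole zone is a zone transfer, which can be locked down.
      # list configured zones locally
      grep -r "^[[:space:]]*zone" /etc/named.conf /var/named/ 2>/dev/null
      # in named.conf, refuse zone transfers except to your own secondaries
      options {
          allow-transfer { none; };    # or { 203.0.113.10; };
      };
      # verify from another host that a transfer is now refused
      dig @ns1.example.com example.com axfr
    Names that must resolve publicly can't be hidden entirely, but unused domains are more often discovered through registration records and brute-force scanning than through the DNS server itself.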

    Read the article

  • Case Management API by Koen van Dijk

    - by JuergenKress
    Case Management is a new addition to Oracle BPM in release 11.1.1.1.7 (PS6). This new release contains the Case Management engine; see Léon's blog at http://leonsmiers.blogspot.nl/ for more details. However, this release does not currently contain a case portal. The Case Management APIs, just like the already existing Oracle BPM APIs, help in developing a portal page with relative ease. This blog will use some real-life examples from the EURent case management application and portal application developed by Oracle. The Oracle BPM Case Management API is a Java-based API that enables developers to programmatically access the new Case Management functionalities. It is an elaborate API that can access all the functionality of Oracle Case Management. I will describe two of those functionalities in this blog: retrieving case data as a DOM (http://www.w3.org/DOM/) and attaching a document to a case. Libraries: First of all, when creating a Case Management project you will need to attach the following libraries: these contain all the classes that are in the Case Management API. Service client: To do anything with the BPM Case Management API it is necessary to create a CaseManagementServiceClient object. The Case Management service client is the central piece of the Case Management API. It can be used to retrieve two different types of services. The first is the case stream service, which contains functionality to upload and download documents to and from a case. The second one is the CaseService, which contains all the other functionality acting upon a case, including but not limited to: Read the complete article here. SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: ACM API, adaptive Case Management, BPM, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • How difficult is it to set up a mail server?

    - by Jacob R
    I want a secure mail solution, as I am looking to move away from Google and other parties looking into my private data. How much of a PITA is it to set up my own mail server? Should I go for an external provider with a good privacy policy and encrypted data instead? I have a VPS running Debian (with a dedicated IP + reverse DNS), and I'm a fairly capable Linux administrator, having set up a couple of web servers and home networks and looked over the shoulder of sysadmins at work. The security I currently have on the VPS is limited to iptables and installing/running the bare minimum of what I need (currently basically irssi and lighttpd). When setting up a mail server, is there a lot to take into consideration? Will my outgoing mail be marked as spam on other servers if I don't implement a number of solutions? Will reliable spam filtering be difficult to set up? Can I easily encrypt the stored mail?
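    For a sense of scale, a rough sketch of the smallest useful stack on Debian (domain names and paths are placeholders); most of the real effort goes into TLS, SPF/DKIM, reverse DNS and spam filtering rather than the base install:
      apt-get install postfix dovecot-imapd spamassassin
      # key Postfix settings in /etc/postfix/main.cf
      #   myhostname          = mail.example.org
      #   mydestination       = example.org, localhost
      #   smtpd_tls_cert_file = /etc/ssl/certs/mail.pem
      #   smtpd_tls_key_file  = /etc/ssl/private/mail.key
      # publish an SPF TXT record and make sure the PTR record matches myhostname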

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems.  Allow me to summarize the idea. It is:  Build a compute platform optimized to run your technologies Develop application-aware, intelligently caching storage components Take an impressively fast network technology interconnecting it with the compute nodes Tune the application to scale with the nodes to yet unseen performance Reduce the amount of data moving via compression Provide this all in a pre-integrated single product with a single-pane management interface All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put these all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run those like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system.  As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.  I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we learned to appreciate and respect for their features: The SPARC T4 CPUs: Each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.  While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar on single-thread performance too - a humble 5x better than their ancestors.  Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between these two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!). Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.  A PCI controller right on the chip for customers who need I/O performance.  Built-in, no-cost Virtualization:  Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one; the hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead.  This enables customers to run a number of Solaris 10 and Solaris 11 VMs, separate and independent of each other, within a physical server. II. For Database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide the DB back-end storage for the Oracle Databases running on the SPARC SuperCluster, and that is what they are built and tuned for: DB performance.  These storage cells are SQL-aware.  
That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to pre-filter the data and thus transfer only what is relevant. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other - and that is waiting for I/O.  They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer.  They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.  Of course one can't simply rely on disks for performance; there is flash storage included for caching.  III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with: Real high speed: 40 Gbit/s. Full duplex, of course. Oh, and a really low latency.  RDMA. Remote Direct Memory Access. This technology allows the DB nodes to do exactly that: remotely, directly placing SQL commands into the memory of the storage cells. Dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.  You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other.  IV. Including a general-purpose storage component too: the ZFSSA, a unified storage appliance providing both NAS and SAN access, with the following features:  NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.  All the ZFS features on board: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares. Storage heads in an HA cluster configuration providing availability of the data.  DTrace Live Analytics in a web-based Administration UI. It serves as general-purpose application data storage for your non-database applications running on the SPARC SuperCluster over whichever protocol they prefer, easily replicating, snapshotting and cloning data for them.  There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked through its interior. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster, and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.  What technologies shall run on it? SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle on Oracle advantage. But you can run other software environments such as SAP if you please too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them into Virtual Machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI. 
Here are the key takeaways once again. The SPARC SuperCluster: is a pre-integrated Engineered System; contains SPARC T4-4 servers with built-in virtualization, cryptography and dynamic threading; contains the Exadata storage cells that intelligently offload the burden of the DB nodes; contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way; combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand; can grow from a single half rack to several full racks; and supports the consolidation of hundreds of applications. To summarize: all these technologies are great by themselves, but the real value, like in every other Oracle Engineered System, is integration. All these technologies are tuned to perform together. Together they are way more than the sum of their parts - and a careful and actually very time-consuming integration process is necessary to orchestrate all of them for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done.  Now go, provide services.   -- charlie  

    Read the article

  • Apache - setting up a subdomain

    - by Adam
    I'm having trouble getting a subdomain working on an Apache install on Linux. Here is what I've configured:
    DNS: connect.goneglobal.com. CNAME 54.251.35.112
    Apache httpd.conf:
      <VirtualHost *:80>
          DocumentRoot /var/www/html/connect.goneglobal.com
          ServerName connect.goneglobal.com
      </VirtualHost>
    I restarted httpd. This IP is registered to this server and works for other sites on this Apache instance (it's the first time I've tried a subdomain). The issue appears to be with DNS; requests don't seem to reach the site. Note: I have an index.php in the DocumentRoot. Note: there is an A record for goneglobal.com. which goes to a different hosting provider. Thanks.
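    For comparison, a hedged sketch of a setup that should work: a CNAME has to point at another hostname, so a bare IP address needs an A record instead (values reuse those from the question; NameVirtualHost applies to Apache 2.2-era configs):
      ; DNS zone: use an A record for an IP address, not a CNAME
      connect.goneglobal.com.   IN  A   54.251.35.112

      # httpd.conf
      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName   connect.goneglobal.com
          DocumentRoot /var/www/html/connect.goneglobal.com
      </VirtualHost>
    Running dig connect.goneglobal.com from an outside host confirms whether the record has propagated before Apache is even in the picture.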

    Read the article

  • Ubuntu to Ubuntu VNC over SSH tunnel

    - by rxt
    I have a Linux Ubuntu desktop at home with SSH enabled, a VNC server installed and a router rule configured. It all works, and at home I can connect over the local network from my Mac. From the outside I can log in via SSH. I've configured PuTTY as follows:
      session: host name and port number
      connection > ssh > tunnels: forwarded ports: L5900|192.168.0.23
    The local address is 192.168.1.45. When I make the connection I can log in to the remote machine. Then I open Remote Desktop Viewer and click connect:
      protocol: vnc
      host: ?
      use host as ssh tunnel: ?
    I don't know what to use for the last two options. Which IP addresses should I use?
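    The PuTTY forward above is equivalent to the following OpenSSH command, which may make the addresses clearer (the public hostname is a placeholder):
      # local port 5900 is tunnelled to the VNC server 192.168.0.23 on the home LAN
      ssh -L 5900:192.168.0.23:5900 user@home.example.org
      # the VNC client then connects to the local end of the tunnel:
      #   host: localhost (port 5900 / display :0), no separate SSH-tunnel option needed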

    Read the article

  • Why do strace/truss sometimes 'fix' stuck processes?

    - by Emmel
    Sometimes you have a process that's been stuck for a while, and as soon as you go to poke at it with strace/truss just to see what's going on, it gets magically unstuck and continues to run! So merely 'observing' these programs has some impact on the running of the stuck programs... what's happening here? Did strace (I guess via ptrace(2)?) send a signal, causing the program to stop blocking, or something like that? I've seen this several times -- most recently on Linux RHEL 4 (with a Perl script mucking with processes and doing some network IO in that case), but in a few other contexts as well. Unfortunately, I can't reproduce this, as it tends to happen... in times of crisis. But my curiosity remains. :-) Any elucidation appreciated.
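    Next time it happens, a few low-impact checks can narrow down what the process is blocked on before (or instead of) attaching a tracer; the PID is a placeholder, and the last comment reflects the usual explanation rather than a certainty:
      grep State /proc/1234/status   # S = interruptible sleep, D = uninterruptible (I/O) sleep
      cat /proc/1234/wchan           # kernel function the process is sleeping in
      cat /proc/1234/syscall         # syscall and arguments it is blocked in (newer kernels only)
      strace -p 1234                 # attaching via ptrace interrupts a blocking syscall; programs
                                     # that mishandle the resulting EINTR/restart can appear to unstick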

    Read the article

  • Working with EO composition associations via ADF BC SDO web services

    - by Chris Muir
    ADF Business Components support the ability to publish the underlying Application Modules (AMs) and View Objects (VOs) as web services through Service Data Objects (SDOs).  This blog post looks at a minor challenge to overcome when using SDOs and Entity Objects (EOs) that use a composition association. Using the default ADF BC EO association behaviour, ADF BC components allow you to work with VOs that are based on EOs that are part of a parent-child composition association.  A composition association enforces that you cannot create records for the child outside the context of the parent.  For example, when creating invoice lines you want to enforce that the individual lines have a related parent invoice record; it simply doesn't make sense to save invoice lines without their parent invoice record. The following screenshot, using the ADF BC Tester, demonstrates the correct way to create a child Employees record as part of a composition association with Departments: And the following screenshot shows you the wrong way to create an Employee record: Note the error which is enforced by the composition association: (oracle.jbo.InvalidOwnerException) JBO-25030: Detail entity Employees with row key null cannot find or invalidate its owning entity.  Working with composition associations via the SDO web services  Shay Shmeltzer recently recorded a good video which demonstrates how to expose your ADF Business Components through the SDO interface. On exposing the VOs you get a choice of operations to publish, including create, update, delete and more: For example, through the SDO test interface we can see that the create operation will request the attributes for the VO exposed, in this case EmployeesView1: In this specific case though, just like in the ADF BC Tester, an attempt to create this record will fail with JBO-25030; the composition association is still enforced: The correct way to do this is through the create operation on the DepartmentsView1, which also lets you create an Employees record in the context of the parent, thus satisfying the composition association rule: Yet the issue here is that the create operation will always create both the parent Departments and the Employees records.  What do we do if we've already created the parent Departments record previously, and we just want to create additional Employees records for that Department?  The create method of the EmployeesView1, as we saw previously, doesn't allow us to do that; the JBO-25030 error will be raised. The solution is the "merge" operation on the parent Departments record: In this case for the Departments record you just need to supply the DepartmentId of the Department you want the Employees record to be associated with, as well as the new Employees record.  When invoked, only the Employees record is created; supplying the DepartmentId of the Departments record satisfies the composition association without actually creating or updating the associated Department record that already exists in the database. Be warned, however: if you supply any more attributes for the Department record, it will result in a merge (update) of the associated Departments record too. 

    Read the article

  • Apache is running; however, it reports that it is not, and it will not restart.

    - by solo
    Apache is running; however, it reports that it is not, and it will not restart.
      # /etc/init.d/httpd status
      httpd.worker is stopped
      # /usr/sbin/lsof -iTCP:80
      COMMAND   PID  USER   FD  TYPE DEVICE SIZE NODE NAME
      httpd.wor 1169 root    3u IPv6 2974        TCP  *:http (LISTEN)
      httpd.wor 1211 daemon  3u IPv6 2974        TCP  *:http (LISTEN)
      httpd.wor 1213 daemon  3u IPv6 2974        TCP  *:http (LISTEN)
      httpd.wor 1215 daemon  3u IPv6 2974        TCP  *:http (LISTEN)
      httpd.wor 1352 daemon  3u IPv6 2974        TCP  *:http (LISTEN)
      # /etc/init.d/httpd restart
      Stopping httpd:                                            [FAILED]
      Starting httpd: [Wed Mar 24 10:33:51 2010] [warn] module proxy_ajp_module is already loaded, skipping
      (98)Address already in use: make_sock: could not bind to address [::]:80
      (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
      no listening sockets available, shutting down
      Unable to open logs                                        [FAILED]
    OS: Linux, distro: CentOS 5. Restarting the server didn't help, nor did killing Apache and starting it. Any idea what is causing this inconsistency?
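    A hedged reading of the output: the init script's status/stop logic relies on a PID file that no longer matches the live httpd.worker processes, so it reports "stopped" while those workers still hold port 80. A possible cleanup sketch, using the PIDs from the lsof output above:
      killall httpd.worker                  # or: kill 1169 1211 1213 1215 1352
      /usr/sbin/lsof -iTCP:80               # confirm port 80 is now free
      rm -f /var/run/httpd.pid              # clear a stale PID file if one is left behind
      /etc/init.d/httpd start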

    Read the article

  • "Target the specific user you will be using and assign it user id 0/group 0"

    - by Jeremy Holovacs
    I am trying to virtualize an Ubuntu machine using VMware vCenter Converter, but I ran into permissions issues. I followed the instructions of parts 1 and 2 on this page, but when I got to "For Ubuntu operating systems further configuration is needed" I started running into trouble. I'm decent at Linux, but I'm not an experienced sysadmin. How do I "target the specific user you will be using and assign it user id 0/group 0"? How do I "ensure that you also still enable Allow root to ssh even though you are not using the root account"? Thanks for your help.
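    A sketch of one way to satisfy those two instructions, with heavy caveats: giving an existing account uid/gid 0 effectively makes it a second root account, so it should be treated as temporary for the conversion only; the user name is a placeholder.
      usermod -o -u 0 converter      # -o permits the duplicate (non-unique) uid 0
      usermod -g 0 converter         # primary group 0 (root)
      # allow root logins in /etc/ssh/sshd_config, then reload the daemon
      #   PermitRootLogin yes
      /etc/init.d/ssh reload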

    Read the article
