Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.


  • Oracle Database 11g in action: real user experiences

    - by Lajos Sárecz
    Last year Mike Dietrich held an Upgrade Workshop in Hungary as well. In the video below he shares a few interesting customer stories about why it is worth moving to Oracle Database 11g and what benefits the customers who are already past the upgrade process have gained. If the examples from abroad are not convincing enough, in three weeks' time the 11g upgrade stories of Hungarian customers can also be heard in the "Korszerű adatközpontok" ("Modern data centers") session of the HOUG Conference.

    Read the article

  • Mount "Macrium Reflect" on a partition, boot from there ?

    - by b e
    Can the Macrium Reflect recovery CD be mounted and used with GRUB? If the CD can be 'put' (loaded/mounted/...) in a partition, then the only disc needed would be the actual recovery disc, which could be on an external hard drive, or even on the same machine in another partition, allowing one to recover using only what's on the machine itself. I have WXPpro and Xubuntu 8.04 dual-booted and am really happy with them together; right now I use each to fix problems with the other when they come up. I also have a partition for the Reflect CD, but I just can't get it to load from GRUB, which would be great... Thanks for any thoughts - probably someone has already done this, I know!

    Read the article

  • Gentoo on Mac Mini - can't get framebuffer to work

    - by user42055
    I have the latest Gentoo on an Intel Mac Mini with 945G graphics. I'm trying to start X (with no config) but it complains that /dev/fb0 doesn't exist. I've tried adding the following options to the kernel boot params: video=intelfb:mode=800x600-32@60,accel,hwcursor vga=761 because I read that the framebuffer might not be enabled unless you set a vga= option. Unfortunately the kernel doesn't recognise that option. If I change it to vga=ask it presents me with a list of about 6 text modes no greater than 80x60. In the kernel I have agpgart, drm (using the i830 module) and the VGA text console compiled in. What am I not doing right?
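
    A quick sanity check (a sketch; it assumes the kernel was built with CONFIG_IKCONFIG_PROC so /proc/config.gz exists, otherwise grep the .config in the kernel source tree) is to confirm which framebuffer pieces the running kernel actually contains and whether any fb device was registered at boot:

      # Which framebuffer/DRM options is this kernel built with? (sketch)
      zgrep -E 'CONFIG_FB_INTEL|CONFIG_DRM_I915|CONFIG_FB_VESA|CONFIG_FRAMEBUFFER_CONSOLE' /proc/config.gz \
          || grep -E 'CONFIG_FB_INTEL|CONFIG_DRM_I915|CONFIG_FB_VESA|CONFIG_FRAMEBUFFER_CONSOLE' /usr/src/linux/.config
      # Did any framebuffer device register at boot?
      dmesg | grep -iE 'intelfb|vesafb|fb0'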

    Read the article

  • If the file changes, send an email about the diff

    - by user62367
    I have 2 scripts, Script "A" and Script "B". Script A regularly watches the dhcpacks in the logs for the past 2 minutes [the DHCP lease is configured to 2 minutes]. It writes the MAC addresses to a file [/dev/shm/dhcpacks-in-last-2min.txt] every 2 minutes. OK, this is working, active clients are in this file. Super! Script B: on pastebin I'm trying to create a script that watches the changes in the /dev/shm/dhcpacks-in-last-2min.txt file (every 1 sec). OK. But: my watcher script [the pastebinned one] is not working fine - sometimes it works, sometimes it reports that someoneXY logged out, but it's not true! Nothing happened, and the problem is not in Script A. Can someone help me point out what I am missing? How can I watch a file (every second) that contains only MAC addresses, so that when someone doesn't get a dhcpack in 2 minutes and their MAC address disappears from /dev/shm/dhcpacks-in-last-2min.txt, I know who it was? [I pastebinned my script, but something's wrong with it.] Thank you for any help... I've been patching my script for days now.. :\
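
    A minimal sketch of one way Script B could be written, assuming inotify-tools and a working "mail" command are installed; the recipient address is a placeholder:

      #!/bin/bash
      # Watch the MAC list and mail the difference whenever it changes (sketch).
      FILE=/dev/shm/dhcpacks-in-last-2min.txt
      PREV=$(mktemp)
      sort "$FILE" > "$PREV"
      # Watch the directory rather than the file, so the watch survives Script A
      # rewriting or replacing the file.
      while inotifywait -qq -e close_write,moved_to,create /dev/shm; do
          CUR=$(mktemp)
          sort "$FILE" > "$CUR"
          # "<" lines are MACs that disappeared (no dhcpack in the last 2 minutes),
          # ">" lines are MACs that newly appeared.
          if ! diff "$PREV" "$CUR" > /tmp/dhcpack.diff; then
              mail -s "dhcpack client list changed" admin@example.com < /tmp/dhcpack.diff
              mv "$CUR" "$PREV"
          else
              rm "$CUR"
          fi
      done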

    Read the article

  • Using Computer Management (MMC) with the Solaris CIFS Service (August 25, 2009)

    - by user12612012
    One of our goals for the Solaris CIFS Service is to provide seamless Windows interoperability: not just to deliver ubiquitous, multi-protocol file sharing, which is obviously a major part of this project, but to support Windows services at a fundamental level. It's an ongoing mission and our latest update includes support for Windows remote management. Remote management is extremely important to Windows administrators, and one of the mainstay tools is Computer Management. Computer Management is a Windows administration application, actually a collection of Microsoft Management Console (MMC) tools, that can be used to configure, monitor and manage local and remote services and resources. The MMC is an extensible framework of registered components, known as snap-ins, which allows Computer Management to provide comprehensive management features for both the local system and remote systems on the network. Supported Computer Management features include:
      - Share Management: support for share management is relatively complete. You can create, delete, list and configure shares. It's not yet possible to change the maximum allowed or number of users properties, but other properties, including the Share Permissions, can be managed via the MMC.
      - Users, Groups and Connections: you can view local SMB users and groups, monitor user connections and see the list of open files. If necessary, you can also disconnect users and/or close files.
      - Services: you can view the SMF services running on an OpenSolaris system. This is a read-only view - we don't support service management (the ability to start or stop SMF services) from Computer Management (yet).
    To ensure that only the appropriate users have access to administrative operations, there are some access restrictions on these remote management features. Regular users can:
      - List shares
    Only members of the Administrators or Power Users groups can:
      - Manage shares
      - List connections
    Only members of the Administrators group can:
      - List open files and close files
      - Disconnect users
      - View SMF services
      - View the EventLog
    Here's a screenshot from when I was using Computer Management and Server Manager (another Windows remote management application) on Windows XP to view some open files on an OpenSolaris system, to prepare a slide presentation on MMC support.
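
    For context, here is roughly what the OpenSolaris side needs before those MMC snap-ins have anything to manage (a sketch; the tank/docs dataset, the "docs" share name and WORKGROUP are placeholders):

      # Enable the kernel CIFS server and publish a ZFS dataset as an SMB share
      # (sketch; names are placeholders).
      svcadm enable -r smb/server
      smbadm join -w WORKGROUP
      zfs set sharesmb=name=docs tank/docs
      sharemgr show -vp            # confirm the share is published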

    Read the article

  • Home server - HP ProLiant MicroServer - Software and setup - OS on USB stick? [closed]

    - by Lloyd Watkin
    I've just purchased an HP ProLiant MicroServer for home use. I want to set it up with a web server, Samba shares, the usual stuff. My question is really about system setup. It has an internal USB socket, so I attempted to install a copy of Fedora 14 onto a USB stick in that socket. I turned off X/GNOME, but it still ran like a pig. I've now put the OS on one of the internal disks (250 GB, 7200 rpm), but I was wondering if there is a way to utilise the internal USB to give me better power saving, allowing the hard drives to be shut down when not in use. How would you set this server up? I'd rather not go to the extra cost of an SSD right now, but if that's the best way then so be it.
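
    On the power-saving side, one thing often done regardless of where the OS lives (a sketch; /dev/sdb and /dev/sdc are placeholders for the data disks, and not every drive honours these settings) is to give the data disks an idle spin-down timeout with hdparm:

      # Spin the data disks down after 10 minutes of inactivity (sketch; device
      # names are placeholders - check them with "sudo fdisk -l" first).
      sudo hdparm -S 120 /dev/sdb /dev/sdc   # -S 120 = 120 x 5 s = 10 minute idle timeout
      sudo hdparm -C /dev/sdb                # report the drive's current power state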

    Read the article

  • using tcpdump to display XML API requests without headers or ack packets

    - by Carmageddon
    I need assistance: I am trying to use tcpdump to capture API requests and responses between two servers. So far I have the following command: tcpdump -iany -tpnAXs0 host xxx.xxx.xxx.xxx and port 6666. My problem is that the output is still hard to read, because it includes the headers and the ACK packets. I would like to remove those and only see the XML bodies. I tried to use grep -v, but apparently this is all one request, so it filters the entire thing... Thanks!
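
    One possible refinement (a sketch, untested against this particular traffic; xxx.xxx.xxx.xxx is the placeholder address from the command above): extend the capture filter so tcpdump only prints TCP segments that actually carry payload, which suppresses the bare ACKs, and drop the -X flag so only the ASCII bodies are shown.

      # Print only TCP segments with a non-zero payload on port 6666 (sketch).
      # The arithmetic is: IP total length - IP header length - TCP header length != 0.
      tcpdump -i any -tpnAs0 \
        'host xxx.xxx.xxx.xxx and port 6666 and
         (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)'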

    Read the article

  • Is there software that will help me convert my PST files into a searchable web archive?

    - by chronoz
    I have used POP3 for many, many years and have always used PST files for back-up purposes. I'd like to be able to create a searchable mail archive of this 12 GB worth of e-mail. I used Horde + qmail for a while for searching e-mail, but it was truly horrible and extremely slow even when searching a few tens of thousands of e-mails, let alone more than a million. I would prefer a free solution that provides fast searching through historical e-mails, and preferably one hosted on a server, so I don't have to worry about backing up any more crucial data on my desktop.
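
    One route that is often suggested (a sketch; it assumes the libpst tools are available and that whatever IMAP/webmail stack you pick can index mbox folders - archive.pst and the output directory are placeholders) is to convert the PST files to plain mbox first:

      # Convert a PST into a tree of mbox folders that an IMAP server can index
      # (sketch; package and file names are assumptions).
      sudo apt-get install pst-utils             # provides readpst from libpst
      mkdir -p ~/mail-archive
      readpst -r -o ~/mail-archive archive.pst   # -r recreates the PST folder structure on disk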

    Read the article

  • Which Large File System Format to use for USB Flash drive compatible with Ubuntu/Mac/Windows?

    - by wajiw
    I've had this problem for a long time and can't find a solution. I switch between the 3 OSes all the time and use a 1 TB USB drive to do so. I can't seem to find a format that is compatible across all systems and handles large files (at least 8-9 GB). Does anyone have a solution for this? Recently I tried exFAT, but that messes up the filesystem when trying to read on Windows after adding files from Ubuntu (using the FUSE driver). The OSes I'm currently using are Windows Vista/7, Mac OS X (10.6.5) and Ubuntu 10.10.
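
    One option that comes up a lot for this situation (a sketch, not a tested recommendation for this particular drive; /dev/sdX1 is a placeholder) is NTFS: it handles files far beyond 4 GB, Ubuntu writes to it via ntfs-3g, and Mac OS X reads it natively, although writing from OS X needs a third-party driver such as the Mac port of NTFS-3G.

      # Format the 1 TB USB partition as NTFS from Ubuntu (sketch; double-check the
      # device name with "sudo fdisk -l" first - mkfs will erase the partition).
      sudo apt-get install ntfs-3g ntfsprogs
      sudo mkfs.ntfs -f -L USB1TB /dev/sdX1   # -f = quick format, -L sets the volume label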

    Read the article

  • A short but intense GCC Gathering in London

    - by user817571
    About one week ago I joined many long-time GCC friends and acquaintances in London for a gathering organized by Google (in particular, I guess, Diego and Ian should be thanked). It was only a weekend, and I wasn't able to attend on Sunday morning, but it was a very good occasion to raise some issues in a very relaxed way, in particular those at the border between areas of competence, which are the most difficult to discuss during normal work days. If you are interested in a general overview and some notes, this is a good link: http://gcc.gnu.org/wiki/GCCGathering2011 As you may easily guess, the third topic is mine, which I managed to have up quite early on Friday morning thanks to the votes of some good friends like Dodji (the ordering of the topics resulted from democratic voting on Friday evening!). I learned a lot from the discussion: for example, that the new C++11 'final' should certainly be exploited largely in the C++ front end; the various reasons why devirtualization can be quite tricky (but I'm really confident that Martin and Honza are going to make good progress, also based on a set of short testcases which I promised to collect); and that, as explained by Ian, the gold linker already implements the nice --icf (Identical Code Folding) facility, which some friends of mine are definitely going to like (however, see http://sourceware.org/bugzilla/show_bug.cgi?id=12919). I also enjoyed the observations made by Lawrence, who remarked that in C++11 we are going to see more pointer iterations implicitly produced by the new range-based for loop, and we really want to make sure the loop optimizers are able to deal with those as well as with loops explicitly using a counter. All in all, I really hope we are going to do it again!

    Read the article

  • Cracking WEP with Aircrack and Kismet

    - by Jenny
    Just a minor question, but I notice that when aircrack lists networks, it does not list the encryption type of each network. Which seems fair enough, as you can use Kismet; however on my machine when I end Kismet and its server, the monitor interface is not removed and I cannot remove it manually, which screws with aircrack. So, is Kismet needed to view the encryption types of networks, and if so, how do you use it peacefully in unison with aircrack?

    Read the article

  • password protect a VPS account?

    - by Camran
    I ordered a VPS today and am about to upload my website. It uses Java, MySQL, PHP etc... However, I need to password protect the site at first... I use Ubuntu 9.10 and have just installed LAMP. What is the easiest way to do this? Thanks. PS: I'm having problems with the Server Fault website, that's why I am posting this here. Sorry.
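
    A minimal sketch using Apache basic authentication (the username, the password-file path and the /var/www/ directory are assumptions for a default Ubuntu 9.10 LAMP install; note that basic auth over plain HTTP sends the password unencrypted):

      # Create a password file and require a login for everything under /var/www/
      # (sketch; "myuser" and the paths are placeholders).
      sudo apt-get install apache2-utils
      sudo htpasswd -c /etc/apache2/.htpasswd myuser
      printf '%s\n' \
          '<Directory /var/www/>' \
          '    AuthType Basic' \
          '    AuthName "Restricted"' \
          '    AuthUserFile /etc/apache2/.htpasswd' \
          '    Require valid-user' \
          '</Directory>' | sudo tee /etc/apache2/conf.d/protect-site > /dev/null
      sudo /etc/init.d/apache2 reload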

    Read the article

  • ADF Business Components

    - by Arda Eralp
    ADF Business Components and JDeveloper simplify the development, delivery, and customization of business applications for the Java EE platform. With ADF Business Components, developers aren't required to write the application infrastructure code required by the typical Java EE application to:
      - Connect to the database
      - Retrieve data
      - Lock database records
      - Manage transactions
    ADF Business Components addresses these tasks through its library of reusable software components and through the supporting design-time facilities in JDeveloper. Most importantly, developers save time using ADF Business Components since the JDeveloper design time makes typical development tasks entirely declarative. In particular, JDeveloper supports declarative development with ADF Business Components to:
      - Author and test business logic in components which automatically integrate with databases
      - Reuse business logic through multiple SQL-based views of data, supporting different application tasks
      - Access and update the views from browser, desktop, mobile, and web service clients
      - Customize application functionality in layers without requiring modification of the delivered application
    The goal of ADF Business Components is to make the business services developer more productive. ADF Business Components provides a foundation of Java classes that allow your business-tier application components to leverage the functionality provided in the following areas:
    Simplifying Data Access
      - Design a data model for client displays, including only necessary data
      - Include master-detail hierarchies of any complexity as part of the data model
      - Implement end-user Query-by-Example data filtering without code
      - Automatically coordinate data model changes with the business services layer
      - Automatically validate and save any changes to the database
    Enforcing Business Domain Validation and Business Logic
      - Declaratively enforce required fields, primary key uniqueness, data precision-scale, and foreign key references
      - Easily capture and enforce both simple and complex business rules, programmatically or declaratively, with multilevel validation support
      - Navigate relationships between business domain objects and enforce constraints related to compound components
    Supporting Sophisticated UIs with Multipage Units of Work
      - Automatically reflect changes made by business service application logic in the user interface
      - Retrieve reference information from related tables, and automatically maintain the information when the user changes foreign-key values
      - Simplify multistep web-based business transactions with automatic web-tier state management
      - Handle images, video, sound, and documents without having to use code
      - Synchronize pending data changes across multiple views of data
      - Consistently apply prompts, tooltips, format masks, and error messages in any application
      - Define custom metadata for any business components to support metadata-driven user interface or application functionality
      - Add dynamic attributes at runtime to simplify per-row state management
    Implementing High-Performance Service-Oriented Architecture
      - Support highly functional web service interfaces for business integration without writing code
      - Enforce a best-practice interface-based programming style
      - Simplify application security with automatic JAAS integration and audit maintenance
      - "Write once, run anywhere": use the same business service as a plain Java class, EJB session bean, or web service
    Streamlining Application Customization
      - Extend component functionality after delivery without modifying source code
      - Globally substitute delivered components with extended ones without modifying the application
    ADF Business Components implements the business service through the following set of cooperating components:
    Entity object
    An entity object represents a row in a database table and simplifies modifying its data by handling all data manipulation language (DML) operations for you. Entity objects are basically your one-to-one representation of a database table: each table in the database has one and only one EO. The EO contains the mapping between columns and attributes. EOs also contain the business logic and validation. These are your core data services; they are responsible for updating, inserting and deleting records. The Attributes tab displays the actual mapping between attributes and columns; the mapping has the following fields:
      - Name: the name of the attribute we expose in our data model
      - Type: the data type of the attribute in our application
      - Column: the column to which we want to map the attribute
      - Column Type: the type of the column in the database
    View object
    A view object represents a SQL query. You use the full power of the familiar SQL language to join, filter, sort, and aggregate data into exactly the shape required by the end-user task. The attributes in the view object actually come from the entity object. In the end the VO will generate a query, but you basically build a VO by selecting which EOs need to participate in it and which attributes of those EOs you want to use. That's why you have the Entity Usage column, so you can see the relation between VO and EO. In the Query tab you can clearly see the query that will be generated for the VO. At this stage we don't need it and just use it for information purposes; in later stages we might use it.
    Application module
    An application module is the controller of your data layer. It is responsible for keeping hold of the transaction and it exposes the data model to the view layer: you expose the VOs through the application module. This is the abstraction of your data layer that you want to show to the outside world. It defines an updatable data model and top-level procedures and functions (called service methods) related to a logical unit of work for an end-user task. While the base components handle all the common cases through built-in behavior, customization is always possible and the default behavior provided by the base components can be easily overridden or augmented.
    When you create EOs, a foreign key is translated into an association in our model. It defines the type of relation, which end is the master and which the child, and what the visibility of the association looks like. A similar concept exists to identify relations between view objects; these are called view links. They are almost identical to associations, except that a view link is based upon attributes defined in the view object (it can also be based upon an association). Here's a short summary:
      - Entity Objects: representations of tables
      - Associations: relations between EOs, representations of foreign keys
      - View Objects: the logical model
      - View Links: relationships between view objects
      - Application Module: the interface to your application

    Read the article

  • EBS Concurrent Processing Information Center

    - by LuciaC
    Do you have problems or questions about concurrent request processing? Do you want to know:
      - How and when to run CP Diagnostics?
      - What are the latest Hot Topics being looked at for Concurrent Processing?
      - All about the Concurrent Process Analyzer self-service Health-Check script?
    Go to the EBS Concurrent Processing Information Center (Doc ID 1304305.1) and find out the above and lots more!

    Read the article

  • How can I get Gnome-Do to open in multiple X Screens?

    - by btelles
    Hi, I LOVE Gnome-Do (the Ubuntu version of Quicksilver). The only thing is that I have several monitors, which are all completely separate X screens (i.e. I can't move windows between them), and Gnome-Do will only open on ONE of those monitors. If I go to monitor/screen #2 and press Super+Space, the Gnome-Do window appears on the first monitor. Is it possible to get a separate instance of Gnome-Do on each screen? P.S. Using profiles may be a workaround... I've managed to get multiple instances of Firefox by using "firefox -P my_first_screen"... anything like that available in Gnome-Do?
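
    A possible workaround to try (a sketch; it assumes the screens are :0.0 and :0.1 on display :0 - check your xorg.conf - and note that Gnome-Do may enforce a single instance per session, in which case the second launch will simply poke the existing one):

      # Try launching one Gnome-Do instance per X screen by pointing DISPLAY at
      # each screen (sketch; screen numbers are assumptions).
      DISPLAY=:0.0 gnome-do &
      DISPLAY=:0.1 gnome-do &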

    Read the article

  • OWB – OWBLand on SourceForge

    - by David Allan
    There are a bunch of interesting utilities - either experts or OMB scripts - that are hosted on SourceForge by some keen OWB users (see the home page here). One of the main initiatives has been an Excel to OWB 'one click ETL' utility, which looks to have had a fair amount of code added; there is an example, but it's kinda light on documentation, although it does look like it covers quite a lot. One of the nice things about SourceForge is that you can peek into the statistics and see what kind of activity has gone on: from last August there have been a bunch of downloads, with a big peak last November... Another utility there is one to generate OMB from a mapping definition - a bunch of useful stuff at http://sourceforge.net/projects/owbland/files/

    Read the article

  • apache returning "The connection was reset"

    - by usjes
    One of my dedicated servers had a network issue today and the data center had to replace a router. Since then the sites on that server return a "The connection was reset" error most of the time. I tried installing nginx and the sites open better with it, but it still shows the error sometimes. Everything in the config seems normal; what could be causing this error? UPDATE: I just noticed that in the WHM Apache status there is always only 1 request currently being processed, with 8 idle workers. I know for sure the server receives thousands of requests per minute. What could be limiting this to such a low number?

    Read the article

  • Difference between ps output and top output?

    - by Soumya Prasad Ukil
    I find it difficult to understand the output produced by ps and top. This is the output from top:
      PID   PSID  USERNAME   TID  PRI NICE  SIZE   RES  STATE   TIME    CPU     COMMAND
      26439 23712 soumyau  26439   15    0 7512M 5234M  sleep  286:25  16.67%  or_lse2 (18)
      26523 23712 soumyau  26439   -2    0 7512M 5234M  cpu9   143:10   8.33%  or_lse2
      26522 23712 soumyau  26439   -2    0 7512M 5234M  cpu3   143:10   8.33%  or_lse2
    And this is from ps (ps -L -p 26439 -o pcpu,psr,pid,user,tid):
      %CPU PSR   PID USER      TID
      99.9   3 26439 soumyau 26522
      99.9   9 26439 soumyau 26523
       0.0   8 26439 soumyau 26439
    Why are there differences between the two results? Can you briefly explain the significance of the two CPU% values?
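
    If this is a Linux box with procps, one way to compare like with like (a sketch; PID 26439 is the one from the question) is to ask both tools for per-thread figures, keeping in mind that they measure differently: ps -o pcpu is cumulative CPU time divided by the thread's lifetime and is expressed per single CPU, while a top configured to show a share of all processors will report a fully busy core on a 12-CPU machine as roughly 8.33%.

      # Per-thread CPU for the same process from both tools (sketch).
      ps -L -p 26439 -o tid,psr,pcpu,etime,time,comm   # pcpu = cputime / elapsed time, per single CPU
      top -H -p 26439                                  # -H lists threads; %CPU is over top's refresh interval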

    Read the article

  • What can cause peaks in PageTables in /proc/meminfo?

    - by Fuzzy76
    I have a game server running Debian Lenny on a VPS host. Even when experiencing a fairly low load, the players start experiencing major lag (ping times rise from 50 ms to 150-500 ms) in bursts of 3-10 seconds. I have installed Munin server monitoring, but when looking at the graphs it looks like the server has plenty of CPU, RAM and bandwidth available. The only weird thing I noticed is some peaks in the memory graph attributed to "page_tables", which maps to PageTables in /proc/meminfo, but I can't find any good information on what this might mean. Any ideas what might be causing this? If you need any more graphs, just let me know. The interrupts/second count is roughly 400-600 during this period (nearly all from eth0). The drop in committed memory was caused by me trying to lower the allocated memory for the server from 512 MB to 256 MB, but that didn't seem to help.
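
    One way to correlate the lag bursts with page-table growth (a sketch; the 5-second interval and the log path are arbitrary choices) is to sample /proc/meminfo with a timestamp and line the spikes up against the lag reports:

      # Log PageTables and a couple of related counters every 5 seconds (sketch).
      while true; do
          echo -n "$(date '+%F %T') "
          grep -E 'PageTables|Committed_AS|SwapCached' /proc/meminfo | tr '\n' ' '
          echo
          sleep 5
      done >> /var/log/pagetables-watch.log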

    Read the article

  • Iptables mark incoming packet - vpn routing

    - by Tom
    I have connected my home to my workplace through OpenVPN for off-site backup reasons. The connection is working nicely. At work I have 5 fixed IP addresses. Now I would like to assign one of these IP addresses to be forwarded to my home machine. I have confirmed packet arrival at my home machine with tcpdump. The problem is that my default route at home is NOT tun0 (naturally), but eth0 to my own ISP. So I created a separate routing table to route my tun0 packets back to where they belong, but I do not know how to mark the incoming packets which arrive through tun0 with iptables, so I can drive them back. I do not want any port restrictions; only what comes from tun0 should leave through tun0. thanks tom
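
    A minimal sketch of the usual connmark approach, assuming the extra routing table is called "vpn" and already has its default route via tun0 (the table name and the 0x1 mark value are assumptions):

      # Mark connections that arrive over tun0, copy the mark onto the locally
      # generated replies, and route marked traffic via the "vpn" table (sketch).
      iptables -t mangle -A PREROUTING -i tun0 -j CONNMARK --set-mark 0x1
      iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
      ip rule add fwmark 0x1 table vpn
      ip route flush cache
      # If replies still get dropped, loosen reverse-path filtering on tun0:
      # sysctl -w net.ipv4.conf.tun0.rp_filter=2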

    Read the article

  • How to mount remote samba share from local host with multiple groups?

    - by Dragos
    I am using mount.cifs to mount a remote Samba share (both client and server are Ubuntu Server 8.04) like this:
      mount.cifs //sambaserver/samba /mountpath -o credentials=/path/.credentials,uid=someuser,gid=1000
      $ cat .credentials
      username=user
      password=password
    I mounted as a user from the local system with username and password via mount.cifs, but the problem is that the user is part of multiple groups on the remote system, and with mount.cifs I can only specify one gid. Is there a way to specify all the gids that the remote user has? Is there a way to:
      1. Mount the remote Samba share with multiple groups on the local system?
      2. Browse the mount from 1) in the terminal, since I want to pass some files from Samba as arguments to local programs?
    Other solutions would be nautilus sftp://, which runs through GVFS, but newer GNOME no longer writes ~/.gvfs to disk, so I can't browse it in the terminal. The last solution would be NFS, but that means I have to synchronize the uids and gids on the local system with the ones from the server.
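
    One workaround that is sometimes used instead of multiple gids (a sketch using the share, mount point and uid from the question; noperm is a standard cifs mount option): let the server enforce permissions, so the remote account's full group membership applies, and skip the client-side checks entirely.

      # Skip client-side permission checks so the server's (multi-group) ACLs
      # decide access (sketch).
      mount.cifs //sambaserver/samba /mountpath \
          -o credentials=/path/.credentials,uid=someuser,gid=1000,noperm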

    Read the article

  • How can I download a package and all of its dependencies with apt-get

    - by Velislav Marinov
    I'm using Ubuntu 12.04 and I would like to use apt-get to download a package and all of its dependencies. Those packages will have to be installed on computers with no internet connection, so in addition to the base package I also need all of the package's dependencies. Is there an easy way to do this (like in the Muon package manager)? I know that I can use the apt-get download command for this, but I don't want to manually specify each package that Muon recommends to install or upgrade.
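
    A common approach (a sketch; "somepackage" is a placeholder, and note that --simulate only lists packages not already installed on the machine where you run it) is to let apt compute the dependency closure, feed the list to apt-get download, and then install the .deb files offline with dpkg:

      # Download a package plus everything apt would install for it (sketch).
      mkdir debs && cd debs
      apt-get download $(apt-get install --simulate somepackage | awk '/^Inst/ {print $2}')
      # Copy the .deb files to the offline machine, then there:
      #   sudo dpkg -i *.deb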

    Read the article

  • Too many BIND "query (cache) denied" messages - DNS attack?

    - by Jake
    Once BIND crashed, and I ran:
      tail -f /var/log/messages
    I see a massive number of log entries every second. Is this a DNS attack, or is there something wrong? Sometimes I see a domain in the logs written like this: dOmAin.com (mixed upper and lower case). As you can see, there is only one single domain in the logs, queried from many different IPs:
      Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#38921: query (cache) 'ns1.domain2.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 192.221.144.171#38833: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 74.125.189.17#42428: query (cache) 'ns2.domain2.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 192.221.146.27#37899: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#39263: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 8.0.16.170#59723: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#32903: query (cache) 'dOmAin.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 134.58.60.1#47558: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 192.221.146.34#47387: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 8.0.16.8#59392: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 74.125.189.19#64395: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 217.72.163.3#42190: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 83.146.21.252#22020: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 192.221.146.116#57342: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#52020: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 8.0.16.72#64317: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#31989: query (cache) 'dOmAin.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#47436: query (cache) 'ns2.domain2.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 74.125.189.16#44005: query (cache) 'ns1.domain2.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#50379: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 94.241.128.3#60106: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#59118: query (cache) 'domain.com/A/IN' denied
      Oct 10 02:21:26 mail named[20831]: client 212.95.135.78#27811: query (cache) 'domain.com/A/IN' denied
    /etc/resolv.conf:
      ; generated by /sbin/dhclient-script
      nameserver 4.2.2.4
      nameserver 8.8.4.4
    BIND config:
      // generated by named-bootconf.pl
      options {
          directory "/var/named";
          /*
           * If there is a firewall between you and nameservers you want
           * to talk to, you might need to uncomment the query-source
           * directive below. Previous versions of BIND always asked
           * questions using port 53, but BIND 8.1 uses an unprivileged
           * port by default.
           */
          // query-source address * port 53;
          allow-transfer { none; };
          allow-recursion { localnets; };
          //listen-on-v6 { any; };
          notify no;
      };
      //
      // a caching only nameserver config
      //
      controls {
          inet 127.0.0.1 allow { localhost; } keys { rndckey; };
      };
      zone "." IN {
          type hint;
          file "named.ca";
      };
      zone "localhost" IN {
          type master;
          file "localhost.zone";
          allow-update { none; };
      };
      zone "0.0.127.in-addr.arpa" IN {
          type master;
          file "named.local";
          allow-update { none; };
      };
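
    To get a feel for whether this is a spoofed-source flood rather than a misconfiguration (a sketch; the log path is the one from the question, and the awk field positions assume the exact message format shown above), it helps to count which client addresses and query names dominate the denied entries. One name queried from many unrelated networks usually points at an attempted reflection run that BIND is already refusing:

      # Top source addresses among the denied cache queries (sketch).
      grep 'query (cache)' /var/log/messages | grep denied \
          | awk '{print $(NF-4)}' | cut -d'#' -f1 | sort | uniq -c | sort -rn | head
      # Top query names among the denied cache queries (sketch).
      grep 'query (cache)' /var/log/messages | grep denied \
          | awk -F"'" '{print $2}' | sort | uniq -c | sort -rn | head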

    Read the article
