Search Results

Search found 64711 results on 2589 pages for 'core data'.

  • Oracle Snapshot Not Working [closed]

    - by nayef harb
    I have created a snapshot that takes data from two tables and has a refresh interval of one day, but the snapshot data is not refreshing; it is still the same. Is there something I am missing? Here is the code:

    CREATE SNAPSHOT test
    REFRESH COMPLETE
    START WITH SYSDATE
    NEXT SYSDATE + 1
    AS
    SELECT item_code, item_conc_code, tran_bran_code, SUM(tran_qty) bal_qty
    FROM tranhist a, itemmast b
    WHERE a.tran_item_code = b.item_code
    GROUP BY item_code, item_conc_code, tran_bran_code
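
    A first diagnostic step (a hedged sketch, not part of the original question): scheduled snapshot refreshes run through the database job queue, so check that job-queue processes are enabled, inspect the refresh job, and try a manual complete refresh. If the manual refresh works, the scheduled job is the likely culprit. The snapshot name below matches the example above.

    -- Job-queue processes must be > 0 for scheduled refreshes to run
    SHOW PARAMETER job_queue_processes

    -- Inspect the refresh job created for the snapshot
    SELECT job, what, next_date, broken FROM user_jobs;

    -- Force an immediate complete refresh
    EXEC DBMS_MVIEW.REFRESH('TEST', 'C');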

    Read the article

  • Designing Databases for Rapid Resilience

    As the volume of data increases, DBAs need to plan more actively for rapid restores in the event of failure. For this, the intelligent use of filegroups is important, particularly when the Enterprise Edition of SQL Server offers the hope of online restores. How, though, should you arrange your data on the different filegroups? What happens if the primary filegroup gets corrupted? Why back up and restore indexes?
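
    As a hedged illustration of the idea (a minimal sketch of a piecemeal restore under the simple recovery model, with a read-only Archive filegroup; the database, filegroup, and backup file names are all hypothetical):

    -- Stage 1: restore the primary and other read-write filegroups,
    -- bringing the database online
    RESTORE DATABASE Sales
        READ_WRITE_FILEGROUPS
        FROM DISK = 'D:\Backup\Sales_ReadWrite.bak'
        WITH PARTIAL, RECOVERY;

    -- Stage 2: restore the read-only Archive filegroup later; with
    -- Enterprise Edition this is an online restore, so the rest of
    -- the database stays available meanwhile
    RESTORE DATABASE Sales
        FILEGROUP = 'Archive'
        FROM DISK = 'D:\Backup\Sales_Archive.bak'
        WITH RECOVERY;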

    Read the article

  • How to Use KDE's Clipboard and Klipper App

    MakeTechEasier: "KDE has an advanced clipboard system, largely due to a small program called Klipper, which can store more than one piece of data. KDE also has the ability to copy and move files with copying and pasting, and automatic creation of files using clipboard data."

    Read the article

  • A quick look at: sys.dm_os_buffer_descriptors

    - by fatherjack
    SQL Server places data into cache as it reads it from disk, so as to speed up future queries. This DMV lets you see how much data is cached at any given time, and knowing how this changes over time can help you ensure your servers run smoothly and are adequately resourced to run your systems. This DMV gives the number of cached pages in the buffer pool along with the database id that they relate to:

    USE [tempdb]
    GO
    SELECT COUNT(*) AS cached_pages_count
         , CASE database_id ...(read more)
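
    For reference, here is a complete query along the same lines (a sketch, not necessarily the article's full listing). Each buffer-pool page is 8 KB, so dividing the page count by 128 gives megabytes; database_id 32767 is the hidden resource database:

    SELECT CASE database_id
               WHEN 32767 THEN 'ResourceDb'
               ELSE DB_NAME(database_id)
           END AS database_name
         , COUNT(*) AS cached_pages_count
         , COUNT(*) / 128 AS cached_mb   -- 128 pages of 8 KB = 1 MB
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_pages_count DESC;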

    Read the article

  • How To Harden PHP5 With Suhosin On CentOS 5.4

    Howtoforge: "This tutorial shows how to harden PHP5 with Suhosin on a CentOS 5.4 server. From the Suhosin project page: 'Suhosin is an advanced protection system for PHP installations that was designed to protect servers and users from known and unknown flaws in PHP applications and the PHP core.'"

    Read the article

  • Openmpi 1.6.3 on ubuntu 12.10

    - by torem
    I manually installed the tar.gz of openmpi 1.6.3 on Ubuntu 12.10, but now mpif90.openmpi returns the following:

    Cannot open configuration file /usr/local/share/openmpi/mpif90.openmpi-wrapper-data.txt
    Error parsing data file mpif90.openmpi: Not found

    How can I get mpif90.openmpi running again? It ran fine when I installed openmpi using apt-get install, but that way I only get version 1.6.1. Thanks.

    Read the article

  • Notebook Review: Toshiba Tecra A11

    Toshiba's 15.6-inch business notebook doesn't skimp on features, with everything from an old-fashioned RS-232 port to facial recognition software, not to mention a fast Core i7 CPU and Nvidia graphics. Does this $1,349 laptop PC have the right stuff to serve as a desktop replacement?

    Read the article

  • Proper setup of shared folders for users

    - by user221486
    First, I would like to say thanks for helping. I have a big problem setting up the correct permissions for shared folders. I have:

    - Windows 7 x64 Ent., name: backupfb, joined to the domain, with a shared folder on drive E: (E:\backup)
    - 50 clients/laptops with Tivoli Storage Manager FastBack for Workstations, which save files to the shared folder

    I need to configure the permissions for my shared folders so that only the owner of a folder can access it. The folder structure is:

    E:\backup <- shared as the "backup" folder (\\backupfb\backup\)
    E:\backup\BackupAdmin <- this directory is used by the Tivoli Storage Manager FastBack for Workstations client to download revisions and configurations. Nodes require read-only access to these directories.
    E:\backup\RealTimeBackup <- enables client accounts to create directories that are only accessible by the account that created them. As a result, the directory that contains data for a node is not created until that node connects to the server.

    So the permissions should look like this (taken from the instructions), with inheritable permissions from the object's parent DISABLED:

    \\backupfb\backup\BackupAdmin
    - Allow Users (This folder, subfolders, and files): Traverse Folder / Execute File, List Folder / Read Data, Read Attributes, Read Extended Attributes, Delete Subfolders and Files, Delete, Read Permissions
    - Allow Administrators (This folder, subfolders, and files): Full Control

    Both folders have the option "Apply these permissions to objects and/or containers within this container only" enabled. Here everything works fine.

    \\backupfb\backup\RealTimeBackup
    - Allow Administrators (This folder, subfolders, and files): Full Control
    - Allow CREATOR OWNER (This folder, subfolders, and files): Full Control
    - Allow Users from the domain (This folder only, Special): Traverse Folder / Execute File, List Folder / Read Data, Read Attributes, Read Extended Attributes, Create Files / Write Data, Create Folders / Append Data, Delete Subfolders and Files, Read Permissions
    - Allow OWNER RIGHTS (This folder, subfolders, and files): Full Control

    Here I have a huge problem with CREATOR OWNER: I am able to set Full Control, but I can only apply it to "Subfolders and files only". When I change the scope to "This folder, subfolders and files" and save, it changes back to "Subfolders and files only". So I tried using icacls to set up the permissions:

    @echo off
    takeown /F E:\backup\ /R /A
    for /D %%i IN (E:\backup\RealTimeBackup\*) DO icacls E:\backup\RealTimeBackup\%%~nxi /grant:r cloud\%%~nxi:F /T /C
    pause

    After that, users are able to create just one folder in \\backupfb\backup\RealTimeBackup\userfolder, but the problem remains with subfolders. In the log I have:

    FBW5022E Unable to access the specified file
    Explanation: The file specified is unable to be accessed. Possibly spelled incorrectly, or bad path, or permissions.
    User response: Ensure the user has the proper permissions for the file and directories involved and that the file and directory exist.

    Any ideas? Please help. Thanks.

    Read the article

  • No Webcam Device

    - by Aliyah
    deeva@androliyah-A6200:~$ sudo lshw -C video
    [sudo] password for deeva:
      *-display
           description: VGA compatible controller
           product: Core Processor Integrated Graphics Controller
           vendor: Intel Corporation
           physical id: 2
           bus info: pci@0000:00:02.0
           version: 02
           width: 64 bits
           clock: 33MHz
           capabilities: msi pm vga_controller bus_master cap_list rom
           configuration: driver=i915 latency=0
           resources: irq:43 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:e080(size=8)
    deeva@androliyah-A6200:~$

    How do I get my webcam to work?

    Read the article

  • "Inside Job"

    Embedded databases power back-end hardware, business applications, and portable devices everywhere. Find out how Oracle embedded databases live and work at the core of hardware, software, and other devices—and deliver cash, health, and security.

    Read the article

  • Require password to login to Nexus 7

    - by gnudoc
    The default behavior in the Nexus 7 image is to log straight in to the default user's desktop, bypassing the lightdm greeter. This seems like an acceptable behavior for testing the core, but it's clearly insecure. I've changed the default password and would like lightdm to actually require the password to be entered, rather than just having a button that says "login". I've turned automatic login on and off in System Settings → User Accounts, but this doesn't help. Any suggestions?

    Read the article

  • XML DATATYPE (series 1)

    New to SQL Server 2005 is the XML data type, which lets you store XML documents and fragments in a SQL Server database. An XML fragment is an XML instance that is missing a single top-level element. You can create columns and variables of the XML type and store XML instances in them. Note that the stored representation of an XML data type instance cannot exceed 2 GB.
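
    A minimal sketch of both uses (the table and column names are hypothetical): an XML column on a table, and an XML variable holding a fragment, i.e. an instance with no single top-level element.

    -- A table with an XML column (each stored instance limited to 2 GB)
    CREATE TABLE dbo.Docs
    (
        DocID   INT IDENTITY PRIMARY KEY,
        DocData XML
    );

    -- An XML variable holding a fragment (two top-level elements, no root)
    DECLARE @frag XML;
    SET @frag = '<item id="1"/><item id="2"/>';

    INSERT INTO dbo.Docs (DocData) VALUES (@frag);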

    Read the article

  • Database Management for SharePoint 2010

    With each revision, SharePoint becomes more of a SQL Server database application, with everything that implies for planning and deployment. There are advantages to this: SharePoint can make use of mirroring, data compression, and remote BLOB storage. It can employ advanced tools such as data file compression and object-level restore. DBAs can employ familiar techniques to speed up SharePoint applications. Bert explains the way that SharePoint and SQL Server interact.
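
    As a hedged illustration of one such technique (an Enterprise Edition feature; the database and table names below are hypothetical, and whether compressing a SharePoint content database is supported in your environment is a separate question), page-level data compression is enabled with a single rebuild:

    -- Enable page compression on one table of a content database
    USE WSS_Content;
    ALTER TABLE dbo.AllDocs REBUILD WITH (DATA_COMPRESSION = PAGE);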

    Read the article

  • Is there a way to replicate very large file shares in real time?

    - by fsckin
    I have an hourly cron job that copies about 40GB of data from a source folder into a new folder with the hour appended on the end. When it's done, the job prunes anything older than 24 hours. This data changes very often during work hours and is on a Samba file share. Here's how the folder structure looks:

    \\server\Version.1
    \\server\Version.2
    \\server\Version.3
    ...
    \\server\Version.24

    The contents of each new folder usually don't change very much compared to the last one, since this is an hourly job. Now you might be thinking that I'm an idiot for dreaming this up. Truth is, I just found out about it. It's actually been used for years and is so incredibly simple that anyone could delete the ENTIRE 40GB share (imagine that dialog spooling up... deleting thousands and thousands of files) and it would actually be faster to restore by moving the latest copy back to the source than it took to delete. Brilliant!

    Now to top this off, I need to efficiently replicate this 960GB of "mostly similar" data to a remote server over a WAN link, with the replication happening as close to real-time as possible -- think hot spare, disaster recovery, etc. My first thought was rsync. Total failure. Rsync sees a deletion of the folder that is 24 hours old and the addition of a new folder with 30GB of data to sync! I also looked at rdiff-backup and unison; they both appear to use similar algorithms and do not keep enough metadata to do this intelligently. The best thing I can find "out of the box" to do this is Windows Server "Distributed Filesystem Replication", which uses "Remote Differential Compression" -- after reading the background information on how it works, it actually looks like exactly what I need. Problem: both servers are running Linux. D'oh!

    One approach to this that I'm looking at, say it's 5AM and the cron job finishes:

    1. The new Version.5 folder arrives on the local server.
    2. SSH to the remote server and copy Version.4 to Version.5.
    3. Run rsync on the local server, pushing changes to the remote server. Rsync finally knows to do a differential copy between Version.4 and Version.5.

    Is there a smarter way to replicate Samba shares as close to real-time as possible? Anything out there that does "Remote Differential Compression" on Linux?

    Read the article

  • SEQUENCE in SQL Server 2011

    SEQUENCE is a core new feature of SQL Server 2011 (Denali). It is a more performant, flexible alternative to the IDENTITY attribute. This article introduces SEQUENCE and demonstrates how to use it, along with its performance advantage.
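
    A minimal sketch of the syntax (object names are hypothetical): unlike IDENTITY, a sequence is a standalone object, its next value can be fetched outside an INSERT, and one sequence can feed several tables.

    -- Create a standalone sequence object
    CREATE SEQUENCE dbo.OrderNumbers
        AS INT
        START WITH 1
        INCREMENT BY 1;

    -- Fetch the next value directly...
    SELECT NEXT VALUE FOR dbo.OrderNumbers AS next_order_number;

    -- ...or use it as a column default (hypothetical table)
    CREATE TABLE dbo.Orders
    (
        OrderID      INT DEFAULT (NEXT VALUE FOR dbo.OrderNumbers),
        CustomerName NVARCHAR(100)
    );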

    Read the article

  • SEO Reports - The Importance of Having Proper Reporting

    SEO is a dynamic and complex process, and it is as important to report on it correctly as it is to provide the service itself, because a webmaster can only know what is going on from a detailed status report. If you are opting for an SEO provider, it is crucial for you to stay updated on the work being done, and reporting therefore becomes a core aspect of an SEO service.

    Read the article

  • Down to the Wire - Yet More Solaris Things to See at OpenWorld (and JavaOne!)

    - by Larry Wake
    San Francisco is bracing for the annual invasion. The airport's jammed, the tweets are flying, and the numbers are crazy: more than 50,000 attendees and 2,500+ sessions, taking over Moscone Convention Center, two streets, Union Square, and seemingly every hotel in town (98,000 hotel room nights). So yeah, it's busy. And it's not just OpenWorld--we've also got JavaOne, MySQL Connect, and four other sub-events going on as well. Speaking of JavaOne, you can find Solaris-related activity there, too -- I've highlighted one hands-on lab below. Here's a last pre-event roundup of activities for consideration; enjoy the show(s)! (Remember, Schedule Builder is your friend; use it with the session numbers below to register.)

    Monday, October 1st:
    3:15 PM - General Session: Accelerate Your Business with the Oracle Hardware Advantage (GEN9691, Moscone North Hall D)
    John Fowler, head of Oracle's Systems organization, will talk about Oracle hardware technology and how it's co-engineered with other key technologies, including Oracle Solaris.

    Tuesday, October 2nd:
    10:15 AM - Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC (CON4431, Moscone South 270)
    Get the bird's-eye lowdown (whatever that means) on how U.S. Cellular built its Infrastructure as a Service (IaaS) cloud delivery platform with Oracle's SPARC T4 servers, Oracle Solaris 11, Oracle Solaris Cluster 4, and Oracle VM Server for SPARC. The session covers the high-level design, the business case made, implementation details, and lessons learned.

    11:45 AM - Oracle Solaris 11 Panel: Insights and Directions from Oracle Solaris Core Engineering (CON8790, Moscone South 252)
    This has been one of the livelier Solaris-related sessions in years past (and I'm not saying that just because I get to moderate it this year). A panel of core engineers responsible for a wide range of key Solaris technologies will talk about some of the interesting work they've been doing -- but mostly we keep time open for the panel to take questions from attendees, because that's the fun part.

    Wednesday, October 3rd:
    10:00 AM - Tracing Your Java Application Tuning on Oracle Solaris with DTrace (HOL10214, Hilton San Francisco, Franciscan A/B/C/D)
    This JavaOne hands-on lab will show how to use the DTrace framework to dynamically trace your Java applications on Oracle Solaris and uncover new tuning opportunities.

    Thursday, October 4th:
    12:45 PM - Oracle Solaris 11: Optimized for Oracle Database, Oracle WebLogic Server, and Java (CON8800, Moscone South 252)
    Explore how Oracle Solaris 11 has been built to be the best platform for the cloud and enterprise applications, with built-in optimizations to improve performance and deliver unique functionality with Oracle Database, Oracle WebLogic Server, and Java.

    Read the article

  • Use Thread-local Storage to Reduce Synchronization

    Synchronization is often an expensive operation that can limit the performance of a multithreaded program. Using thread-local data structures instead of data structures shared by the threads can reduce synchronization in certain cases, allowing a program to run faster.

    Read the article

  • The Importance of Backing Up Your Website Or Blog

    Backing up a site or blog consists of storing files and data in another location. That way, if something should happen to your site or blog, you'll still have a copy of all the data. Backing up the information isn't all that difficult, and you can save a lot of time and effort in doing so.

    Read the article

  • OpenSSL to be audited and maintained full-time by two developers; $5.4 million allocated to funding critical open source projects

    The CII consortium funds two developers and an audit to secure OpenSSL; $5.4 million is allocated to funding critical open source projects. More than a month after the announcement of the creation of the CII (Core Infrastructure Initiative) consortium, in response to the outcry caused by the OpenSSL Heartbleed flaw, a roadmap has been issued. The aim of the effort is to fund selected free software projects in order to audit them and to detect and fix their bugs. Led by the foundation...

    Read the article
