Search Results

Search found 26283 results on 1052 pages for 'temporary table'.


  • DB2 users and groups

    - by Arun Srini
    I just want to know everyone's experience and take on managing users/authentication on a multi-node DB2 cluster with users and groups. I have 17 apps in production (project-based company, only 2 online apps), and some 30 users with 7 groups:

    prodsel - group that has select privilege on all tables
    produpdt - update group on selective tables (as required by the apps)
    proddel - delete permissions for the group
    prodins - insert permissions for the group

    Now what my company does is, when an app uses a certain user (called app1user) and needs select and insert privilege on a table, they 1. grant select and insert to prodsel and prodins respectively, and 2. add the user to those two groups. This creates a one-to-many relationship between user and privileges, and app1user also gets select on all the other tables granted to the prodsel group. I know this is wrong. Before I explain, I need to know how this is done elsewhere. Please share your experiences, even if you use other databases that use OS-level authentication.
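    For comparison, a minimal sketch (Python with the ibm_db driver; database, credential, table and group names are all made up) of granting an application its own narrowly scoped group instead of reusing the broad prod* groups:

        import ibm_db

        # Illustrative sketch only. In DB2 the users/groups themselves come from
        # the OS, so these GRANTs are the only database-side step.
        conn = ibm_db.connect("PRODDB", "dbadm", "secret")

        # One group per application keeps privileges scoped to exactly the tables
        # that app touches, instead of widening the shared prodsel/prodins groups.
        for stmt in (
            "GRANT SELECT ON TABLE appschema.orders TO GROUP app1grp",
            "GRANT INSERT ON TABLE appschema.orders TO GROUP app1grp",
        ):
            ibm_db.exec_immediate(conn, stmt)

        ibm_db.close(conn)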

    Read the article

  • Simple ADF page using BAM Data Control

    - by [email protected]
    Purpose: In this blog I will walk you through very simple steps to create an ADF page using a BAM data control connection.

    Details:

    Create the project
    Open JDeveloper (make sure you have installed the SOA extension for JDev). Create a new Application using the "Generic Application" template. Click on "Next". Shuttle "ADF Faces" to the right pane for the project technology. Click "Finish".

    Create a BAM connection
    In the resource palette click on "Folder -> New Connection -> BAM". Enter the connection name and click "Next". Enter the connection details. Click on "Test connection" and "Finish".

    Create the BAM Data Control
    Open the IDE connection created in the above step. Drag and drop "Employees" to the "Data controls" palette. Select "Flat Query" and click "Finish".

    Create the View
    Create a new JSF page. From the Data Control panel drag and drop "Employees -> Query -> ADF Read Only table". Right click and run the page.

    Read the article

  • Xen and HyperVM build question on os template

    - by Levi De Haan
    I recently built a server with HyperVM and Xen. I know Xen from the command line, but HyperVM ties into our WHMCS, so it's a requirement. My question is this: when I build a new OS template, my partition table is gone, and I know why, but I was wondering if anyone has built anything in HyperVM for adding in partition tables, so I don't have to reinvent the wheel. I can do it from the command line in the created VM with fdisk, and I have tracked down the creation scripts for HyperVM, but I am unsure whether these insert directly into the machine, as it looks like a lot of what it does is externalized and is there for Xen to assign things like the IP address etc. Oh, and on an aside: when I go in to modify the .cnf file to change the boot disk from cdrom to drive on Windows, HyperVM overwrites my setting again when I boot using it. Frustrating as heck; I've been trying to track down where in the code it does this. Has anyone else had this problem, and if so, how did you fix it?
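    As a rough illustration of scripting that manual fdisk step non-interactively (a sketch only, not from the original post; the device name, layout, and the exact sfdisk input dialect all depend on the distro and sfdisk version inside the template):

        import subprocess

        # Hypothetical layout: one Linux partition of ~9 GB plus swap in the rest.
        # Older sfdisk versions take "start,size,Id" lines; newer ones expect
        # "size=...,type=..." syntax, so adjust for the version in the guest.
        DEVICE = "/dev/xvda"          # assumed guest disk name
        LAYOUT = ",9000,83\n,,82\n"

        subprocess.run(
            ["sfdisk", DEVICE],
            input=LAYOUT,
            text=True,
            check=True,
        )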

    Read the article

  • Resize primary partition

    - by telebog
    I have a HDD with the following partition table:

    12 GB Primary Partition (NTFS)
    140 GB Extended Partition (NTFS)

    I want to install Windows 7 and I need more space for the primary partition. The problem is that when I resize the partitions I obtain:

    12 GB Primary Partition (NTFS)
    110 GB Extended Partition (NTFS)
    30 GB Free Space

    So I can't allocate the free space to the primary partition, because it is at the end of the disk. Is there a solution to extend the primary partition to:

    42 GB Primary Partition (NTFS)
    110 GB Extended Partition (NTFS)

    without repartitioning the entire disk? I used Partition Magic, gparted-live-0.4.6-4 and others with no success. With Disk Management from Vista I managed to extend the primary partition, but it made my partitions dynamic.

    Read the article

  • Any good PostgreSQL client for linux?

    - by senotrusov
    Stack Overflow marked this as "belongs-on-serverfault", so I'm crossposting. I am frustrated at not having a good Linux GUI administration and development tool for PostgreSQL. pgAdmin III is a buggy and unusable piece of... hmm, software, compared to the Windows-only PostgreSQL Maestro and EMS PostgreSQL Manager. phpPgAdmin does not look promising. EMS PostgreSQL Manager can work under Wine, but such a setup has a number of issues. Requirements are:

    Table data editing and browsing for large tables (1M+ rows), able to jump by FK, some master-slave editing, GUI filtering and so on
    ER diagrams with in-place schema editing
    Schema editing and browsing with all useful GUI support
    Schema changes log to put into DB versioning (migration scripts)
    Tabbed interface, to be able to work with a number of tables and SQL queries at once

    And so on. Any ideas?

    Read the article

  • How should I evaluate the Database Solution for Large Data Application

    - by GµårÐïåñ
    Background

    I have been tasked to write an application that will be a combination of document and inventory management in VB.NET, which will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing, and possibly OCR to be searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside).

    Help Please

    I need help with understanding how to evaluate my database options. My concern is finding a database solution that will not become unstable due to size restrictions, record limitations or performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now I can pretty much eliminate Access right off the bat as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit and, again, scalability. So I believe that leaves me with MS SQL, SQLite and MySQL (note, I am open to alternatives). And this is where I need help in understanding how to evaluate those databases. The goal is that the data is all in one place (a single file) to make backup and portability easier. For small-volume usage, pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy, large-volume usage as well. Another consideration is the interoperability with .NET and the stability of such code, to avoid errors and memory leaks. How should I evaluate my database options for this scenario?
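    To make the single-file option concrete, here is a minimal sketch using Python's built-in sqlite3 (table, file and column names are illustrative; the real application would be VB.NET with an ADO.NET provider) of storing and reading back a document as a BLOB alongside its metadata:

        import sqlite3

        # One self-contained database file holding both the metadata columns
        # mentioned above and the raw document bytes.
        conn = sqlite3.connect("documents.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS docs (
                doc_name   TEXT,
                doc_date   TEXT,
                notes      TEXT,
                doc_binary BLOB
            )
        """)

        with open("invoice_0001.pdf", "rb") as f:      # hypothetical file
            conn.execute(
                "INSERT INTO docs (doc_name, doc_date, notes, doc_binary) VALUES (?, ?, ?, ?)",
                ("invoice_0001.pdf", "2012-05-01", "scanned invoice", f.read()),
            )
        conn.commit()

        # Reading it back for viewing or printing
        blob = conn.execute(
            "SELECT doc_binary FROM docs WHERE doc_name = ?", ("invoice_0001.pdf",)
        ).fetchone()[0]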

    Read the article

  • Recover Time Machine partition that turned MBR only instead of GUID

    - by alex
    I have one drive that has an NTFS partition, a Time Machine partition (I guess HFS+) and empty space. The other day I created one more partition from Windows 8 (Boot Camp), and since then I can't see the Time Machine one from OS X, though I can see it from Windows. The problem is that Time Machine uses a file system that Windows cannot browse (it only shows some folders), and I need to recover this partition because I have to use it to back up my Mac. On OS X I can only see the NTFS partition; the other one appears unmounted and is impossible to mount. I've come to the conclusion that something has happened to the partition table. TestDisk shows that it's MBR only, when I think it should be GUID, and pressing p shows that it's FDisk_partition_scheme and the Time Machine partition appears as Windows_NTFS. I found this thread that is similar to what's happening to me: Adding NTFS partition to disk in Windows makes HFS+ partition on same disk invisible in Mac OS X

    Read the article

  • MySQL works with straight php, but not in phpMyAdmin or in Drupal

    - by Marek
    I just updated from PHP 5.1 to 5.2 and both drupal and phpMyAdmin stopped being able to save information. I've checked the mysql user permissions - they look ok. I wrote some simple php to insert a row into a table, and it works, but if I try to do the same thing in phpMyAdmin, it just says "no change". phpMyAdmin will delete rows, select rows, but not insert or update them. Drupal does the same thing - it will select info from the tables ok, but not insert or update (or delete). Any ideas? I'm really starting to get desperate! Cheers, Marek

    Read the article

  • Printer monitoring script (PowerShell)

    - by HannesFostie
    I am going to write a script of some sort to check the event viewer on a Windows Server 2003 machine for all print jobs, and then write them to a comma-delimited text file like printername_floor_room.txt. I am wondering what the best way is to do this in real time, and keep checking the event viewer constantly. Any caveats I need to be aware of? Thanks. EDIT: Okay, so I will most likely go for PowerShell and use Get-EventLog and then edit the "table" data. Problems I'm having: if I were to save all this data to a text file, how do I get the data out of it? A comma-separated file I could work with, but this I'm not really sure about. And once that is sorted out, I'm still not sure how to keep the file updated more or less in real time. Can I make this service-like, without hogging all resources? Run it every x seconds, for example?
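    As a rough sketch of the CSV-splitting half of the problem (Python purely for illustration; the same grouping logic ports to PowerShell's Import-Csv/Export-Csv, and the input file and column names are assumptions):

        import csv
        import time
        from collections import defaultdict

        # Assumed input: a CSV exported from the print-service event log with
        # columns like printer, floor, room, document, user.
        def split_by_printer(csv_path="printjobs.csv"):
            rows_by_printer = defaultdict(list)
            with open(csv_path, newline="") as f:
                for row in csv.DictReader(f):
                    key = "{printer}_{floor}_{room}".format(**row)
                    rows_by_printer[key].append(row)
            for key, rows in rows_by_printer.items():
                with open(key + ".txt", "a", newline="") as out:
                    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
                    for row in rows:
                        writer.writerow(row)

        # Crude "service-like" loop: re-run every 60 seconds rather than tailing
        # the event log; cheap, but it re-reads the whole export each time.
        while True:
            split_by_printer()
            time.sleep(60)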

    Read the article

  • How can I roll back xserver-xorg-core and xserver-common?

    - by Ville Sundberg
    A recent update to Xorg broke my desktop, which now looks like this: http://i.imgur.com/PbBxh.jpg In short, the desktop background is not updating on the secondary display. (And if there is no secondary display, the primary display background stops updating.) Looking into the history, I found that this happened right after upgrading two packages: xserver-xorg-core and xserver-common. These were upgraded to 1.9.0-0ubuntu7.3. I'd like to downgrade these packages. How do I do that? I've checked that both have another version in the maverick repo:

        xserver-xorg-core:
          Installed: 2:1.9.0-0ubuntu7.3
          Candidate: 2:1.9.0-0ubuntu7.3
          Version table:
         *** 2:1.9.0-0ubuntu7.3 0
                500 http://fi.archive.ubuntu.com/ubuntu/ maverick-updates/main amd64 Packages
                100 /var/lib/dpkg/status
             2:1.9.0-0ubuntu7 0
                500 http://fi.archive.ubuntu.com/ubuntu/ maverick/main amd64 Packages

    However, apt won't let me downgrade them:

        ville@fluxx ~ % sudo apt-get install xserver-common=2:1.9.0-0ubuntu7 xserver-xorg-core=2:1.9.0-0ubuntu7
        The following packages have unmet dependencies:
          xserver-xorg-core : Depends: xserver-xorg but it is not going to be installed
        E: Broken packages

    And this is the reason:

        ville@fluxx ~ % sudo apt-get install xserver-common=2:1.9.0-0ubuntu7 xserver-xorg-core=2:1.9.0-0ubuntu7 xserver-xorg-core
        The following packages have unmet dependencies:
          xserver-xorg-core : Depends: xserver-common (>= 2:1.9.0-0ubuntu7.3) but 2:1.9.0-0ubuntu7 is to be installed
        E: Broken packages

    Am I out of options here?

    Read the article

  • Removing Eclipse completely

    - by Abhishek Bhandari
    I had Eclipse Galileo working fine. Suddenly it started to hang until it died (Eclipse crashes) when I tried to open a DB2 table from the DbViewer plugin's DB tree view. I tried many things, replacing the DbViewer plugin and tweaking memory settings. This happens only with DbViewer. So I unzipped another Eclipse into another directory, but it opens the same settings, plugins and workspace as the previous Eclipse. I removed the previous Eclipse and the same problem still exists. In simple words: how do I remove Eclipse completely from Windows 7?

    Read the article

  • Best way to transfer files across unstable LAN?

    - by JamesTheAwesomeDude
    This is very similar to Question 326211, but in this case, the LAN is an unstable Wi-Fi connection. I need to transfer about 11 GiB of files between two computers, both running Linux (although one may be rebooted into Windows.) Their connection is both slow and unstable (due to Linux's awful Wi-Fi support,) but removable media (such as a flash drive or external hard drive) is not an option at this time. Right now, I'm slowly transferring the files, one by one, across SFTP, but I have to reconnect each computer approximately every 90 seconds, and the computers are not very close to each other, so this is not feasible. This is not a duplicate of Question 30186; that one specifically concerns Windows 7, and all the proposed solutions involve closed-source, Windows-only programs (which are all spyware IMHO, and are all off the table even if I trusted them - one of the computers is Linux-only.)
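    One idea worth noting (not from the original post): a transfer tool that can resume partial files makes the constant reconnects far less painful. A minimal sketch, assuming rsync over SSH is available on both Linux machines, that simply retries until the whole tree has copied:

        import subprocess
        import time

        # Hypothetical source/destination; --partial keeps half-transferred files
        # so each retry resumes instead of restarting the 11 GiB from scratch.
        CMD = [
            "rsync", "-av", "--partial", "--timeout=30",
            "/data/to-copy/", "user@otherbox:/data/copied/",
        ]

        while True:
            result = subprocess.run(CMD)
            if result.returncode == 0:       # rsync finished the whole tree
                break
            time.sleep(5)                    # wait for the flaky Wi-Fi to come back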

    Read the article

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the new self-tuning feature in DB2 V9.x, a lot of database parameters are set to AUTOMATIC in DB2 v9.7 by default so that DB2 can adjust the values as needed. Most should work fine without manual tweaks. But for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance.

    DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 will allocate a small page size (64KB) for all memory allocations, and expand and shrink the memory as needed. In order to take advantage of the large page size (up to 256MB) supported by the SPARC T3, we need to manually set the size of DATABASE_MEMORY so that DB2 can use the 256MB page size for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn a switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process.

    NUM_IOCLEANERS: This parameter defines the number of page cleaners. The default value of this parameter is AUTOMATIC, which is calculated based on the number of available CPUs and the number of logical partitions. On a SPARC T3 system, where there are over a hundred virtual CPUs and a single DB2 partition, DB2 would set it to #CPUs - 1. This would lead to too many page cleaners competing to flush to disk and would cause aio mutex lock contention. So we need to decrease the value. The good practice is to set the value to the number of physical devices used by the database table space containers.
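    A minimal sketch of making both changes from the instance owner's shell (Python only as a thin wrapper around the db2 command line processor; the database name and both values are placeholders — DATABASE_MEMORY takes a page count, and NUM_IOCLEANERS should match your container device count):

        import subprocess

        DB = "TRADEDB"                 # placeholder database name

        # Both commands go through the db2 CLP, so this must run as the instance
        # owner with the db2profile environment sourced.
        for cmd in (
            f"db2 update db cfg for {DB} using DATABASE_MEMORY 1000000",  # example page count only
            f"db2 update db cfg for {DB} using NUM_IOCLEANERS 8",         # e.g. 8 container devices
        ):
            subprocess.run(cmd, shell=True, check=True)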

    Read the article

  • How to know which partition is which?

    - by user206870
    Well, I was just wondering which partition belongs to which system. On my computer I have Windows 7 and two Ubuntu systems (it was an accident, which is why I need to know which partition is which). So how do I know which one is which? PS: here's the output:

        jp@jp-Satellite-L555D:~$ sudo update-grub
        [sudo] password for jp:
        Generating grub.cfg ...
        Found linux image: /boot/vmlinuz-3.11.0-12-generic
        Found initrd image: /boot/initrd.img-3.11.0-12-generic
        Found memtest86+ image: /boot/memtest86+.bin
        Found Windows 7 (loader) on /dev/sda1
        Found Windows 7 (loader) on /dev/sda2
        Found Windows Recovery Environment (loader) on /dev/sda3
        Found Ubuntu 13.10 (13.10) on /dev/sda7
        done

        jp@jp-Satellite-L555D:~$ sudo fdisk -l
        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xf6f5148e

        Device Boot      Start        End     Blocks   Id  System
        /dev/sda1   *     2048     3074047    1536000  27  Hidden NTFS WinRE
        /dev/sda2      3074048   213421022  105173487+  7  HPFS/NTFS/exFAT
        /dev/sda3    469676032   488396799    9360384  17  Hidden HPFS/NTFS
        /dev/sda4    213422078   469676031  128126977   5  Extended
        /dev/sda5    300185600   463910911   81862656  83  Linux
        /dev/sda6    463912960   469676031    2881536  82  Linux swap / Solaris
        /dev/sda7    213422080   300185599   43381760  83  Linux

        Partition table entries are not in disk order

    Thanks to whoever can answer this. Another quick question: what is the extended partition?

    Read the article

  • Working with data and meta data that are separated on different servers

    - by afuzzyllama
    While developing a product, I've come across a situation where my group wants to store metadata for data-entry forms (questions, layout, etc.) in a different database than the database where the collected data is stored. This is mostly for security, because we want to be able to have our metadata public-facing while keeping collected data as secure as possible. I was thinking about writing a web service that provides the meta information that the data collection program could access. The only issue I see with this approach is that the front end is going to have to match the metadata with the collected data, which would be more efficient as a join on the back end. Currently, this system is slated to run on .NET and MSSQL. I haven't played around with .NET libraries running in SQL, but I'm considering trying to create logic that would pull from the web service, convert the metadata into a table that SQL can join on, and return the combined data and metadata that way. Is this solution the wrong way to approach the problem? Is there a pattern or "industry standard" way of bringing together two datasets that don't live in the same database?
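    As a language-neutral illustration of the front-end join being discussed (Python here for brevity; the endpoint, field names and record shapes are all assumptions, and in the actual system this logic would live in the .NET tier or be pushed into SQL):

        import json
        import urllib.request

        META_URL = "https://example.com/api/form-metadata"   # hypothetical service

        def fetch_metadata():
            # The public-facing service returns question definitions, e.g.
            # [{"question_id": 1, "label": "Age", "layout": "row1"}, ...]
            with urllib.request.urlopen(META_URL) as resp:
                return json.load(resp)

        def join_responses_with_metadata(responses):
            # `responses` come from the secure database, keyed by question_id.
            meta_by_id = {m["question_id"]: m for m in fetch_metadata()}
            return [
                {**meta_by_id.get(r["question_id"], {}), **r}
                for r in responses
            ]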

    Read the article

  • How to proxy to different named databases on the same server using MySQL Proxy?

    - by cclark
    I would like to have two databases on my MySQL server: DEV_DB_A and DEV_DB_B. However, in order to keep everyone's scripts, Query Browser settings and anything else from changing when we switch from using one DB to the other, I'd like to have everyone connect to DEV_DB and then use something like MySQL Proxy running a Lua script which knows the currently active DB is DEV_DB_A and routes queries there. If we restore a fresh version of the DB to DEV_DB_B or make some changes (e.g. partition a table), we can easily switch to DEV_DB_B by changing one Lua script instead of updating references everywhere. I had hoped I might be able to symlink inside of the MySQL data directory, but that didn't work, so it seems like MySQL Proxy is a reasonable approach. Being new to Lua and MySQL Proxy, I'm wondering if anyone else has approached the problem this way and how it worked.

    Read the article

  • Permanent Routes Centos Questions

    - by user65053
    So with a little help I figured out how to set up these routes, and I can set them in rc.local:

        route add -net 208.82.236.0 netmask 255.255.255.0 dev ppp0 metric 1
        route add -net 208.82.236.0 netmask 255.255.255.0 dev eth0 metric 10

    My question is: since the first route is on ppp0, as soon as I disconnect the modem the route is dropped. How do I maintain the route, or make it permanent, so that the next time the modem connects it will follow the route? Currently, after ppp0 disconnects, the route is dropped:

        netstat -r
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        laxapx03.o1.com *               255.255.255.255 UH        0 0          0 ppp0
        208.82.236.0    *               255.255.255.0   U         0 0          0 eth0
        10.0.1.0        *               255.255.255.0   U         0 0          0 eth0
        169.254.0.0     *               255.255.0.0     U         0 0          0 eth0
        default         10.0.1.1        0.0.0.0         UG        0 0          0 eth0

    Read the article

  • Internal Data Masking

    - by ACShorten
    By default, the data in the product is unmasked for authorized users. If particular data within the object is considered a candidate for data masking, then the masking capabilities of the product can be used to mask the data in an appropriate fashion. The inbuilt Data Masking capabilities of the Oracle Utilities Application Framework use a number of configuration elements: An algorithm, of type F1-MASK, is specified to configure the elements of the data masking, including the masking character, the number of suffix characters left unmasked, characters to ignore in the string, the application service, the security type and the authorization levels applicable to the mask. A Data Masking Feature Configuration is created to define where the algorithm applies. The specification of the feature allows you to define the fields to encrypt using the configured algorithm. The algorithm can be attached to a schema field, table field, characteristic, search field and even a child record (such as an identifier). The appropriate user groups are then connected to the application services with the appropriate service types and level to indicate whether the masking applies to the user group or not. For example, say there is a field called CCNBR in the product which holds the credit card details. I would create an algorithm, say CCformatCC, to mask the credit card number with the last few digits left unmasked (as the standard in most systems dictates). I would specify on the Field Mask the following: field="CCNBR", alg="CCformatCC" On the algorithm CCformatCC, I would specify the mask, application service, security type and the authorization level for which users would see the credit card unmasked. To finish the configuration off and implement it, I would connect the appropriate user groups to the application service I specified, with the security type and appropriate authorization level for that group. Whenever a user accesses the CCNBR field on any of the maintenance screens, searches and other screens that use the CCNBR metadata definition, it would then be masked according to the user group that the user was a member of. Refer to the documentation supplied with the F1-MASK algorithm type entry for more examples of what is possible.
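    To make the masking behavior concrete, here is a tiny illustrative function (Python, not part of the product) that mimics what such a mask configuration produces: a masking character, a count of unmasked suffix characters, and characters to ignore in the string:

        def mask_value(value, mask_char="X", unmasked_suffix=4, ignore_chars="-"):
            """Illustrative only: show everything masked except the last few characters."""
            total = sum(1 for c in value if c not in ignore_chars)
            seen = 0
            out = []
            for c in value:
                if c in ignore_chars:
                    out.append(c)          # separators pass through unmasked
                    continue
                seen += 1
                out.append(c if total - seen < unmasked_suffix else mask_char)
            return "".join(out)

        # mask_value("4111-1111-1111-1234") -> "XXXX-XXXX-XXXX-1234"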

    Read the article

  • CNet router - no field for private port

    - by Aadit M Shah
    I'm trying to configure port forwarding on my CNet router for a locally hosted HTTP server. The model number of my router is CQR-981 and the firmware version is 1.0.43. The problem is that there's no field to enter the private port of the HTTP server (the local port): according to the manual there should be one, but the port forwarding page on my router shows no field for the private port. Is there some way I can circumvent this problem? Perhaps manually make an HTTP request to the HTTP server on the router to update the table with the private port number, or perhaps update my firmware to solve this problem.

    Read the article

  • Why does changing the physical socket on your router cause delays?

    - by Josh Browning
    My question involves the delays involved in changing which physical socket your Ethernet cable is connected to. I am aware that if you are connected to a router on a network and then change which physical socket on that router you are using, you will see very small additional delays initially. However, I am curious as to what causes these delays. I originally thought it had to do with the information stored in the routing table and whether that was allocated to a specific socket on the router or not. Although, if your IP address is the same, then I don't understand why there would be delays, because I would have assumed that any information within the router was linked to an IP address rather than a physical socket.

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
    When configuring a WebCenter Content (WCC) cluster, one of the things which makes it unique from some other WebLogic Server applications is its requirement for a shared file system. This is actually no different than 10g and previous versions of UCM, when it ran directly on a JVM. And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured, and if they aren't followed, you may end up with some unwanted behavior. This blog post will go into the details on how exactly the file systems should be split and what options are required.

    Beyond documents being stored on the file system and/or database and metadata being stored in the database along with other structured data, there is other information being read from and written to on the file system. Information such as user profile preferences, workflow item state information, metadata profiles, and other details is stored in files. In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don't step on each other. WCC keeps track of this through the use of lock files on the file system. Because of this, each node of the WCC cluster must have access to the same file system just as they have access to the same database.

    WCC uses its own locking mechanism based on files, so it also needs to have access to those files without file attribute caching and without locking being done by the client (node). If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process which another node is already working on. Or if a particular file is locked by one of the node clients, this could interfere with access by another node. Unfortunately, disabling file attribute caching on the file share can impact performance, so it is important to only disable caching and locking on the particular folders which require it.

    When configuring WebCenter Content after deploying the domain, it asks for 3 different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder. And starting in PS5, it now asks for the User Profile Folder as well. Even if you plan on storing the content in the database, you still need to establish Native File (Vault) and Weblayout directories. These will be used for handling temporary files, cached files, and files used to deliver the UI. Of these directories, the only folder which needs to have file attribute caching and locking disabled is the 'Content Server Instance Folder'. So when establishing this share through NFS or a clustered file system, be sure to specify those options. For instance, if creating the share through NFS, use the 'noac' and 'nolock' mount options. For the other directories, caching and locking should be enabled to provide the best performance to those locations.
    These directory path configurations are contained within the <domain dir>\ucm\cs\bin\intradoc.cfg file:

        #Server System Properties
        IDC_Id=UCM_server1

        #Server Directory Variables
        IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
        FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
        AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
        AppServerJavaUse64Bit=true
        IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
        VaultDir=/mnt/share_with_cache/ucm/cs/vault/
        WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/

        #Server Classpath variables

        #Additional Variables
        #NOTE: UserProfilesDir is only available in PS5 - 11.1.1.6.0
        UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/

    In addition to these folder configurations, it's also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory. So on each node, go to <domain dir>\ucm\cs\bin\intradoc.cfg and add these additional configuration entries:

        VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
        TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
        EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/

    And of course, don't forget the cluster-specific configuration values to add as well. These can be added through Admin Server -> General Configuration -> Additional Configuration Variables or directly in the <IntradocDir>/config/config.cfg file:

        ArchiverDoLocks=true
        DisableSharedCacheChecking=true
        ServiceAllowRetry=true     (use only with Oracle RAC Database)
        PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)

    For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site. In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].

    Read the article

  • How to diagnose very slow pagefile

    - by svick
    Quite often, one of the applications I use freezes ("does not respond") for a while, in extreme cases for a few minutes. This happens especially when switching apps. During this time, the HDD light flashes constantly and perfmon shows that the HDD is used 100% of the time (the CPU, on the other hand, isn't) and that the pagefile is being read (which is to be expected when switching apps), but at a very slow rate. When I sort the disk table in perfmon by reads or writes, the file read from and written to the most is the pagefile, but still at quite a low rate (I don't remember the numbers). How can I diagnose what's causing this? I use Windows Vista, and the computer is a quite ordinary two-year-old laptop.

    Read the article

  • unknown module in my server to get PHP errors in HTML tables

    - by Javier Novoa C.
    Sorry to ask this... I manage Apache and PHP on my computer, but having installed a lot of things, I've lost track of some of them (things I find really useful to have at my job, or to restore in case of emergency). The problem is that I have installed this thing which displays PHP errors in a nice, colored HTML table, but I can't remember what I installed or configured to get it to work like that. Can you give me a hint about it? I'm using Debian Lenny, Apache 2.2 and PHP 5.2. Here's a screenshot: Thank you very much for reading. Javier

    Read the article

  • Is it bad practice for services to share a database in SOA?

    - by Paul T Davies
    I have recently been reading Hohpe and Woolf's Enterprise Integration Patterns, some of Thomas Erl's books on SOA, and watching various videos and podcasts by Udi Dahan et al. on CQRS and event-driven systems. Systems in my place of work suffer from high coupling. Although each system theoretically has its own database, there is a lot of joining between them. In practice this means there is one huge database that all systems use. For example, there is one table of customer data. Much of what I've read seems to suggest denormalising data so that each system uses only its own database, and any updates to one system are propagated to all the others using messaging. I thought this was one of the ways of enforcing the boundaries in SOA - each service should have its own database - but then I read this: http://stackoverflow.com/questions/4019902/soa-joining-data-across-multiple-services and it suggests this is the wrong thing to do. Segregating the databases does seem like a good way of decoupling systems, but now I'm a bit confused. Is this a good route to take? Is it ever recommended that you should segregate a database per SOA service, per DDD bounded context, per application, etc.?
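    As a toy illustration of the propagate-via-messaging idea described above (Python, in-memory only; service and field names are made up), each service keeps its own denormalized copy of customer data and updates it from events instead of joining against a shared database:

        import queue

        customer_events = queue.Queue()          # stands in for a real message bus

        def publish_customer_updated(customer):
            customer_events.put(("CustomerUpdated", customer))

        class BillingService:
            """Owns its own store; never reads the CRM database directly."""
            def __init__(self):
                self.local_customers = {}        # this service's private copy

            def handle(self, event_type, payload):
                if event_type == "CustomerUpdated":
                    self.local_customers[payload["id"]] = payload

        billing = BillingService()
        publish_customer_updated({"id": 42, "name": "Acme Ltd"})
        while not customer_events.empty():
            billing.handle(*customer_events.get())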

    Read the article
