Search Results

Search found 24675 results on 987 pages for 'table'.

  • How should I evaluate the Database Solution for Large Data Application

    - by GµårÐïåñ
    Background: I have been tasked with writing an application in VB.NET that combines document and inventory management. It will store document images (TIFF, PDF, XPS, TXT, DOC, PPT and so on) as binary data that can be retrieved for viewing, printing, and possibly OCR so the documents are searchable, along with metadata such as sender, recipient, type of document, date, source, etc. So the table would probably be something like: DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be stored).

    Help, please: I need help understanding how to evaluate my database options. My concern is finding a database solution that will not become unstable due to size restrictions, record limitations or performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit and, again, scalability. So I believe that leaves me with MS SQL, SQLite and MySQL (note, I am open to alternatives), and this is where I need help understanding how to evaluate those databases. The goal is to keep the data all in one place (a single file) to make backup and portability easier. For small volumes, pretty much any solution will hold up for a while, but my goal is to think ahead and make sure it's able to withstand heavy, large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code, to avoid errors and memory leaks. How should I evaluate my database options for this scenario?
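
    For illustration only (not part of the original question), one way the table sketched above might look as SQL Server DDL; the column types and sizes are assumptions, and MySQL or SQLite would use a BLOB type instead of VARBINARY(MAX):

        -- Hypothetical schema for the document store described above
        CREATE TABLE Documents (
            DOC_ID     INT            NOT NULL PRIMARY KEY,
            DOC_NAME   VARCHAR(255)   NOT NULL,
            DOC_DATE   DATETIME       NOT NULL,
            SENDER     VARCHAR(100)   NULL,
            RECIPIENT  VARCHAR(100)   NULL,
            DOC_TYPE   VARCHAR(50)    NULL,
            SOURCE     VARCHAR(100)   NULL,
            NOTES      VARCHAR(1000)  NULL,
            DOC_BINARY VARBINARY(MAX) NOT NULL  -- the TIFF/PDF/XPS/... image itself
        );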

  • MySQL works with straight php, but not in phpMyAdmin or in Drupal

    - by Marek
    I just updated from PHP 5.1 to 5.2 and both Drupal and phpMyAdmin stopped being able to save information. I've checked the MySQL user permissions and they look OK. I wrote some simple PHP to insert a row into a table, and it works, but if I try to do the same thing in phpMyAdmin, it just says "no change". phpMyAdmin will delete rows and select rows, but not insert or update them. Drupal does the same thing: it will select info from the tables OK, but not insert or update (or delete). Any ideas? I'm really starting to get desperate! Cheers, Marek

  • Printer monitoring script (PowerShell)

    - by HannesFostie
    I am going to write a script of some sort to check the Event Viewer on a Windows Server 2003 machine for all print jobs, and then write them to a comma-delimited text file like printername_floor_room.txt. I am wondering what the best way is to do this in real time, constantly checking the event log. Any caveats I need to be aware of? Thanks. EDIT: Okay, so I will most likely go for PowerShell and use Get-EventLog and then edit the "table" data. Problems I'm having: if I were to save all this data to a text file, how do I get the data out of it? A comma-separated file I could work with, but this I'm not really sure about. And once that is sorted out, I'm still not sure how to keep the file updated more or less in real time. Can I make this service-like, without hogging all the resources? Run it every x seconds, for example?
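
    As a rough sketch of the approach described in the edit (not a finished script), a PowerShell loop like the following polls the System log and appends print-job events as comma-separated lines. The log source name "Print", the output path and the 30-second interval are assumptions to adjust for your environment:

        # Poll the System event log for print-job entries and append them as CSV lines.
        # Assumes Server 2003 writes completed print jobs to the System log, source "Print".
        $outFile = 'C:\printlogs\printername_floor_room.txt'
        $lastRun = Get-Date
        while ($true) {
            $events  = Get-EventLog -LogName System -Source Print -After $lastRun -ErrorAction SilentlyContinue
            $lastRun = Get-Date
            foreach ($e in $events) {
                # Time, user and message are enough for a simple audit trail;
                # commas inside the message are replaced so the file stays parseable.
                "{0},{1},{2}" -f $e.TimeGenerated, $e.UserName, ($e.Message -replace ',', ';') |
                    Out-File -FilePath $outFile -Append
            }
            Start-Sleep -Seconds 30   # polling interval; near real time without hogging resources
        }

    Run from a scheduled task or wrapped in a service, this avoids keeping an interactive console session open.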

  • How can I roll back xserver-xorg-core and xserver-common?

    - by Ville Sundberg
    A recent update to Xorg broke my desktop, which now looks like this: http://i.imgur.com/PbBxh.jpg. In short, the desktop background is not updating on the secondary display. (And if there is no secondary display, the primary display background stops updating.) Looking into the history, I found that this happened right after two packages, xserver-xorg-core and xserver-common, were upgraded to 1.9.0-0ubuntu7.3. I'd like to downgrade these packages. How do I do that? I've checked that both have another version in the maverick repo:

    xserver-xorg-core:
      Installed: 2:1.9.0-0ubuntu7.3
      Candidate: 2:1.9.0-0ubuntu7.3
      Version table:
     *** 2:1.9.0-0ubuntu7.3 0
            500 http://fi.archive.ubuntu.com/ubuntu/ maverick-updates/main amd64 Packages
            100 /var/lib/dpkg/status
         2:1.9.0-0ubuntu7 0
            500 http://fi.archive.ubuntu.com/ubuntu/ maverick/main amd64 Packages

    However, apt won't let me downgrade them:

    ville@fluxx ~ % sudo apt-get install xserver-common=2:1.9.0-0ubuntu7 xserver-xorg-core=2:1.9.0-0ubuntu7
    The following packages have unmet dependencies:
      xserver-xorg-core : Depends: xserver-xorg but it is not going to be installed
    E: Broken packages

    And this is the reason:

    ville@fluxx ~ % sudo apt-get install xserver-common=2:1.9.0-0ubuntu7 xserver-xorg-core=2:1.9.0-0ubuntu7 xserver-xorg-core
    The following packages have unmet dependencies:
      xserver-xorg-core : Depends: xserver-common (>= 2:1.9.0-0ubuntu7.3) but 2:1.9.0-0ubuntu7 is to be installed
    E: Broken packages

    Am I out of options here?

  • Removing Eclipse completely

    - by Abhishek Bhandari
    I had Eclipse Galileo working fine. Suddenly it started to hang until it died (Eclipse crashes) whenever I tried to open a DB2 table from the DbViewer plugin's DB tree view. I tried many things, replacing the DbViewer plugin and tweaking memory settings. This happens only with DbViewer. So I unzipped another Eclipse into another directory, but it picks up the same settings, plugins and workspace as the previous Eclipse. I removed the previous Eclipse and the same problem still exists. In simple words: how do I remove Eclipse completely from Windows 7?

  • Recover Time Machine partition that turned MBR only instead of GUID

    - by alex
    I have one drive that has an NTFS partition, a Time Machine partition (HFS+, I assume) and empty space. The other day I created one more partition from Windows 8 (Boot Camp), and since then I can't see the Time Machine partition from OS X, though I can still see it from Windows. The problem is that Time Machine uses a file system that Windows cannot browse; it only shows some folders, and I need to recover this partition because I have to use it to back up my Mac. On OS X I can only see the NTFS partition; the other one appears unmounted and is impossible to mount. I've come to the conclusion that something has happened to the partition table. TestDisk shows that it is MBR only, when I think it should be GUID, and pressing p shows that it is FDisk_partition_scheme and the Time Machine partition appears as Windows_NTFS. I found this thread that is similar to what is happening to me: Adding NTFS partition to disk in Windows makes HFS+ partition on same disk invisible in Mac OS X

  • DB2 on SPARC T3 Tuning Tips

    - by cherry.shu(at)oracle.com
    With the new self-tuning feature in DB2 V9.x, a lot of database parameters are set to AUTOMATIC by default in DB2 v9.7 so that DB2 can adjust the values as needed. Most should work fine without manual tweaks, but for transaction workloads on SPARC T3 systems, two parameters need to be adjusted manually to achieve optimal performance. DATABASE_MEMORY: When this parameter is set to AUTOMATIC and SELF_TUNING_MEM is set to ON, DB2 will allocate a small page size (64KB) for all memory allocations, and expands and shrinks the memory as needed. In order to take advantage of the large page sizes (up to 256MB) supported by the SPARC T3, we need to manually set the size of DATABASE_MEMORY so that DB2 can use the 256MB page size for its buffer pools, which are implemented as ISM segments. I know this sounds strange, as it seems that you turn one switch and it ends up controlling another function. pmap(1M) output can verify the page sizes used by the DB2 db2sysc process. NUM_IOCLEANERS: This parameter defines the number of page cleaners. The default value of this parameter is AUTOMATIC, which is calculated based on the number of available CPUs and the number of logical partitions. On a SPARC T3 system, where there are over a hundred virtual CPUs and a single DB2 partition, DB2 would set it to #CPUs - 1. This leads to too many page cleaners competing to flush to disk and causes aio mutex lock contention, so we need to decrease the value. A good practice is to set it to the number of physical devices used by the database table space containers.
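
    As a sketch only (the database name MYDB and the values are placeholders, and DATABASE_MEMORY is expressed in 4KB pages), the two settings above can be changed from the DB2 command line processor roughly like this:

        # Replace AUTOMATIC with a fixed size so the buffer pools can be
        # backed by large (256MB) pages on SPARC T3.
        db2 update db cfg for MYDB using DATABASE_MEMORY 10000000

        # Reduce the page cleaners to roughly the number of physical devices
        # behind the table space containers instead of #CPUs - 1.
        db2 update db cfg for MYDB using NUM_IOCLEANERS 8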

  • Best way to transfer files across unstable LAN?

    - by JamesTheAwesomeDude
    This is very similar to Question 326211, but in this case the LAN is an unstable Wi-Fi connection. I need to transfer about 11 GiB of files between two computers, both running Linux (although one may be rebooted into Windows). Their connection is both slow and unstable (due to Linux's awful Wi-Fi support), but removable media (such as a flash drive or external hard drive) is not an option at this time. Right now, I'm slowly transferring the files, one by one, over SFTP, but I have to reconnect each computer approximately every 90 seconds, and the computers are not very close to each other, so this is not feasible. This is not a duplicate of Question 30186; that one specifically concerns Windows 7, and all the proposed solutions involve closed-source, Windows-only programs (which are all spyware IMHO, and are all off the table even if I trusted them; one of the computers is Linux-only).
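
    One widely used approach that is not mentioned in the post, sketched here purely for illustration (paths and host name are placeholders): rsync can resume interrupted files, so a simple retry loop over SSH survives the link dropping every minute or two:

        # Keep re-running rsync until the whole tree has copied; --partial keeps
        # partially transferred files so each reconnect picks up where it left off.
        until rsync -av --partial --timeout=30 /data/to/send/ user@otherhost:/backup/; do
            echo "link dropped, retrying in 5 seconds..."
            sleep 5
        done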

  • How to know which partition is which?

    - by user206870
    Well, I was just wondering which partition belongs to which system. On my computer I have Windows 7 and two Ubuntu systems (it was an accident, which is why I need to know which partition is which). So how do I know which one is which? PS: here's the output:

    jp@jp-Satellite-L555D:~$ sudo update-grub
    [sudo] password for jp:
    Generating grub.cfg ...
    Found linux image: /boot/vmlinuz-3.11.0-12-generic
    Found initrd image: /boot/initrd.img-3.11.0-12-generic
    Found memtest86+ image: /boot/memtest86+.bin
    Found Windows 7 (loader) on /dev/sda1
    Found Windows 7 (loader) on /dev/sda2
    Found Windows Recovery Environment (loader) on /dev/sda3
    Found Ubuntu 13.10 (13.10) on /dev/sda7
    done

    jp@jp-Satellite-L555D:~$ sudo fdisk -l
    Disk /dev/sda: 250.1 GB, 250059350016 bytes
    255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xf6f5148e

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *          2048     3074047     1536000   27  Hidden NTFS WinRE
    /dev/sda2           3074048   213421022  105173487+    7  HPFS/NTFS/exFAT
    /dev/sda3         469676032   488396799     9360384   17  Hidden HPFS/NTFS
    /dev/sda4         213422078   469676031   128126977    5  Extended
    /dev/sda5         300185600   463910911    81862656   83  Linux
    /dev/sda6         463912960   469676031     2881536   82  Linux swap / Solaris
    /dev/sda7         213422080   300185599    43381760   83  Linux

    Partition table entries are not in disk order

    Thanks to whoever can answer this. Another quick question: what is the extended partition?

  • How to proxy to different named databases on the same server using MySQL Proxy?

    - by cclark
    I would like to have two databases on my MySQL server: DEV_DB_A and DEV_DB_B. However, in order to keep everyone's scripts, Query Browser settings and anything else from changing when we switch from using one DB to the other, I'd like to have everyone connect to DEV_DB and then use something like MySQL Proxy running a Lua script which knows that the currently active DB is DEV_DB_A and routes queries there. If we restore a fresh version of the DB to DEV_DB_B or make some changes (e.g. partition a table), we can easily switch to DEV_DB_B by changing one Lua script instead of updating references everywhere. I had hoped I might be able to symlink inside of the MySQL data directory, but that didn't work, so it seems like MySQL Proxy is a reasonable approach. Being new to Lua and MySQL Proxy, I'm wondering if anyone else has approached the problem this way and how it worked.
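
    For illustration, a minimal Lua hook of the kind MySQL Proxy runs might look roughly like this; it assumes clients either issue "USE DEV_DB " or qualify tables as DEV_DB.table, and the rewrite pattern would need tightening for real query text:

        -- Which physical schema the logical name DEV_DB currently points at.
        local active_db = "DEV_DB_A"   -- flip to "DEV_DB_B" after a restore

        function read_query(packet)
            -- Only rewrite plain COM_QUERY packets (the first byte is the command type).
            if string.byte(packet) == proxy.COM_QUERY then
                local query = packet:sub(2)
                local rewritten, n = query:gsub("DEV_DB([%.%s])", active_db .. "%1")
                if n > 0 then
                    -- Send the rewritten query to the server instead of the original one.
                    proxy.queries:append(1, string.char(proxy.COM_QUERY) .. rewritten,
                                         { resultset_is_needed = true })
                    return proxy.PROXY_SEND_QUERY
                end
            end
        end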

  • Working with data and meta data that are separated on different servers

    - by afuzzyllama
    While developing a product, I've come across a situation where my group wants to store meta data for data entry forms (questions, layout, etc.) in a different database than the database where the collected data is stored. This is mostly for security: we want to be able to have our meta data public facing, while keeping collected data as secure as possible. I was thinking about writing a web service that provides the meta information that the data collection program could access. The only issue I see with this approach is that the front end is going to have to match the meta data with the collected data, which would be more efficient as a join on the back end. Currently, this system is slated to run on .NET and MSSQL. I haven't played around with .NET libraries running in SQL, but I'm considering trying to create logic that would pull from the web service, convert the meta data into a table that SQL can join on, and return the combined data and meta data that way. Is this solution the wrong way to approach the problem? Is there a pattern or "industry standard" way of bringing together two datasets that don't live in the same database?
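
    One way to sketch the "convert the meta data into a table that SQL can join on" idea in C# (the table, column and connection names here are made up for illustration, not from the original post): stage the meta data pulled from the public service into a temp table and let the secure database do the join:

        using System.Data;
        using System.Data.SqlClient;

        static class ReportData
        {
            static DataTable GetCombinedData(string connectionString, DataTable formMeta)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();

                    // 1. Create a session-scoped temp table shaped like the meta data.
                    using (var create = new SqlCommand(
                        "CREATE TABLE #FormMeta (QuestionId INT PRIMARY KEY, " +
                        "QuestionText NVARCHAR(400), Layout NVARCHAR(100))", conn))
                    {
                        create.ExecuteNonQuery();
                    }

                    // 2. Bulk copy the meta data (already fetched from the web service) into it.
                    using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "#FormMeta" })
                    {
                        bulk.WriteToServer(formMeta);
                    }

                    // 3. Join against the securely stored responses on the database side.
                    var combined = new DataTable();
                    using (var select = new SqlCommand(
                        "SELECT m.QuestionText, m.Layout, r.Answer " +
                        "FROM #FormMeta m JOIN dbo.Responses r ON r.QuestionId = m.QuestionId", conn))
                    using (var adapter = new SqlDataAdapter(select))
                    {
                        adapter.Fill(combined);
                    }
                    return combined;
                }
            }
        }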

  • Permanent Routes Centos Questions

    - by user65053
    So with a little help I figured out how to set up these routes, and I can set them in rc.local:

    route add -net 208.82.236.0 netmask 255.255.255.0 dev ppp0 metric 1
    route add -net 208.82.236.0 netmask 255.255.255.0 dev eth0 metric 10

    My question: since the first route is via ppp0, the route is dropped as soon as the modem disconnects. How do I maintain the route, or make it permanent, so that the next time the modem connects it will follow the route? Currently, after ppp0 disconnects, the route is dropped:

    netstat -r
    Kernel IP routing table
    Destination      Gateway         Genmask          Flags   MSS Window  irtt Iface
    laxapx03.o1.com  *               255.255.255.255  UH        0 0          0 ppp0
    208.82.236.0     *               255.255.255.0    U         0 0          0 eth0
    10.0.1.0         *               255.255.255.0    U         0 0          0 eth0
    169.254.0.0      *               255.255.0.0      U         0 0          0 eth0
    default          10.0.1.1        0.0.0.0          UG        0 0          0 eth0
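
    For illustration (the file names assume a stock CentOS layout with the interface managed by the usual ifup/pppd scripts, so treat this as a sketch rather than a confirmed fix): CentOS re-applies static routes from a per-interface route file each time the interface comes up, and for a pppd-managed link the same command can instead go in the /etc/ppp/ip-up.local hook:

        # /etc/sysconfig/network-scripts/route-ppp0
        # applied by ifup-routes whenever ppp0 is brought up with ifup
        208.82.236.0/24 dev ppp0 metric 1

        # alternatively, /etc/ppp/ip-up.local (run by pppd after the link comes up):
        #!/bin/sh
        route add -net 208.82.236.0 netmask 255.255.255.0 dev ppp0 metric 1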

  • Internal Data Masking

    - by ACShorten
    By default, the data in the product is unmasked for authorized users. If particular data within an object is considered a candidate for data masking, then the masking capabilities within the product can be used to mask the data in an appropriate fashion. The inbuilt data masking capabilities of the Oracle Utilities Application Framework use a number of configuration elements: An algorithm, of type F1-MASK, is specified to configure the elements of the data masking, including the masking character, the number of suffix characters left unmasked, characters to ignore in the string, the application service, the security type, and the authorization levels applicable to the mask. A Data Masking Feature Configuration is created to define where the algorithm applies. The specification of the feature allows you to define the fields to mask using the configured algorithm. The algorithm can be attached to a schema field, table field, characteristic, search field and even a child record (such as an identifier). The appropriate user groups are then connected to the application services with the appropriate service types and level to indicate whether the masking applies to that user group or not. For example, say there is a field called CCNBR in the product which holds the credit card details. I would create an algorithm, say CMformatCC, to mask the credit card number with the last few digits unmasked (as the standard in most systems dictates). I would specify on the Field Mask the following: field="CCNBR", alg="CMformatCC". On the algorithm CMformatCC, I would specify the mask, application service, security type and the authorization level at which users would see the credit card unmasked. To finish the configuration off and implement it, I would connect the appropriate user groups to the application service I specified, with the security type and the appropriate authorization level for each group. Whenever a user accesses the CCNBR field on any of the maintenance screens, searches and other screens that use the CCNBR meta data definition, it would then be masked according to the user group the user is a member of. Refer to the documentation supplied with the F1-MASK algorithm type entry for more examples of what is possible.

  • Why does changing the physical socket on your router cause delays?

    - by Josh Browning
    My question involves the delays involved when changing which physical socket your Ethernet cable is connected to. I am aware that if you are connected to a router on a network and then change which physical socket on that router you are using, you will initially see very small additional delays. However, I am curious as to what causes these delays. I originally thought it had to do with the information stored in the routing table and whether that was allocated to a specific socket on the router or not. Although, if your IP address is the same, then I don't understand why there would be delays, because I would have assumed that any information within the router was linked to an IP address rather than a physical socket.

  • How to diagnose very slow pagefile

    - by svick
    Quite often, one of the applications I use freezes ("does not respond") for a while, in extreme cases for a few minutes. This happens especially when switching apps. During this time, the HDD light flashes constantly and perfmon shows that the HDD is in use 100% of the time (the CPU, on the other hand, isn't) and that the pagefile is being read (which is to be expected when switching apps), but at a very slow rate. When I sort the disk table in perfmon by reads or writes, the file read and written the most is the pagefile, but the rate is still quite low (I don't remember the numbers). How can I diagnose what's causing this? I use Windows Vista, and the computer is a quite ordinary two-year-old laptop.

  • unknown module in my server to get PHP errors in HTML tables

    - by Javier Novoa C.
    Sorry to ask this... I manage Apache and PHP on my computer, but having installed a lot of things, I've lost track of some of them. (Things I find really useful to have at my job, or to restore in case of emergency.) The problem is that I have installed this thing which displays PHP errors in a nice, colored HTML table, but I can't remember what I installed or configured to get it to work like that. Can you give me a hint about it? I'm using Debian Lenny, Apache 2.2 and PHP 5.2. Here's a screenshot: Thank you very much for reading. Javier

  • CNet router - no field for private port

    - by Aadit M Shah
    I'm trying to configure port forwarding on my CNet router for a locally hosted HTTP server. The model number of my router is CQR-981 and the firmware version is 1.0.43. The problem is that there's no field to enter the private port of the HTTP server (the local port), although according to the manual there should be one. Here's a picture of the manual: Here's a screenshot of my router page for port forwarding (with no field for private port): Is there some way I can circumvent this problem? Perhaps manually make an HTTP request to the HTTP server on the router to update the table with the private port number, or perhaps update my firmware?

  • Is it bad practice for services to share a database in SOA?

    - by Paul T Davies
    I have recently been reading Hohpe and Woolf's Enterprise Integration Patterns, some of Thomas Erl's books on SOA, and watching various videos and podcasts by Udi Dahan et al. on CQRS and event-driven systems. Systems in my place of work suffer from high coupling. Although each system theoretically has its own database, there is a lot of joining between them. In practice this means there is one huge database that all systems use. For example, there is one table of customer data. Much of what I've read seems to suggest denormalising data so that each system uses only its own database, and any updates to one system are propagated to all the others using messaging. I thought this was one of the ways of enforcing the boundaries in SOA: each service should have its own database. But then I read this: http://stackoverflow.com/questions/4019902/soa-joining-data-across-multiple-services and it suggests this is the wrong thing to do. Segregating the databases does seem like a good way of decoupling systems, but now I'm a bit confused. Is this a good route to take? Is it ever recommended that you should segregate a database per SOA service, per DDD bounded context, per application, etc.?

  • Can someone explain the "use-cases" for the default munin graphs?

    - by exhuma
    When installing Munin, it activates a default set of plugins (at least on Ubuntu). Alternatively, you can simply run munin-node-configure to figure out which plugins are supported on your system. Most of these plugins plot straightforward data. My question is not to explain the nature of the data (well... maybe for some) but: what is it that you look for in these graphs? It is easy to install Munin and see fancy graphs, but having the graphs and not being able to "read" them renders them totally useless. I am going to list the standard plugins which are enabled by default on my system, so it's going to be a long list. For completeness, I am also going to list plugins which I think I understand and give a short explanation as to what I think each is used for. Please correct me if I am wrong about any of them. So let me split this question into three parts:

    Plugins where I don't even understand the data

    These may contain questions that are not necessarily aimed at Munin alone. Not understanding the data usually means a gap in fundamental knowledge of operating systems/hardware... ;) Feel free to respond with a "giyf" answer. These are plugins where I can only guess what's going on... I hardly want to look at these "guessing"...

    - Disk IOs per device (IOs/second): What's an IO? I know it stands for input/output, but that's as far as it goes.
    - Disk latency per device (Average IO wait): Not a clue what an "IO wait" is...
    - IO Service Time: This one is a huge mess, and it's near impossible to see anything in the graph at all.

    Plugins where I understand the data but don't know what I should look out for

    - IOStat (blocks/second read/written): I assume the thing to look out for here is spikes, which would mean that the device is in heavy use?
    - Available entropy (bytes): I assume that this is important for random number generation? Why would I graph this? So far the value has always been near constant.
    - VMStat (running/I/O sleep processes): What's the difference between this one and the "processes" graph? Both show running/sleeping processes, whereas the "Processes" graph seems to have more details.
    - Disk throughput per device (bytes/second read/written): What's the difference between this one and the "IOStat" graph?
    - inode table usage: What should I look for in this graph?

    Plugins which I think I understand

    I'll be guessing some things here... correct me if I am wrong.

    - Disk usage in percent (percent): How much disk space is used/remaining. As this approaches 100%, you should consider cleaning up or extending the partition. This is extremely important for the root partition.
    - Firewall Throughput (packets/second): The number of packets passing through the firewall. If this is spiking for a longer period of time, it could be a sign of a DoS attack (or we are simply receiving a large file). It can also give you an idea of your firewall's performance. If it's levelling out and you need more "power", you should consider load balancing. If it's levelling out and you see a correlation with your CPU load, it could also mean that your hardware is not fast enough. Correlations with disk usage could point to excessive LOG targets in your firewall config.
    - eth0 errors (packets in/out): Network errors. If this value is increasing, it could be a sign of faulty hardware.
    - eth0 traffic (bits/second in/out): Raw network traffic. This should correlate with firewall throughput.
    - number of threads: An ever-increasing value might point to a process not properly closing threads. Investigate!
    - processes: Breakdown of active processes (including sleeping). A quick spike here might point to a fork bomb. A slow but ever-increasing value might point to an application spawning sub-processes but not properly closing them. Investigate using ps faux.
    - process priority: This shows the distribution of process priorities. Having only high-priority processes is not of much use. Consider de-prioritizing some.
    - cpu usage: Fairly straightforward. If this is spiking, you may have an attack going on, or a process is hogging the CPU. If it's slowly increasing and approaching the maximum in normal operation, you should consider upgrading your hardware (or load balancing).
    - file table usage: Number of actively open files. If this is reaching the maximum, you may have a process opening but not properly releasing files.
    - load average: Shows a summarized value for the system load. Should correlate with CPU usage. Increasing values can come from a number of sources; look for correlations with other graphs.
    - memory usage: A graphical representation of your memory. As long as you have a lot of unused+cache+buffers you are fine.
    - swap in/out: Shows the activity on your swap partition. This should always be 0. If you see activity on this, you should add more memory to your machine!

  • EF4 CTP5 Code First Remove Cascading Deletes

    - by Dane Morgridge
    I have been using EF4 CTP5 with code first and I really like the new code. One issue I was having, however, was that cascading deletes are on by default. This may come as a surprise, as this is not the case when using Entity Framework with anything but code first. I ran into an exception with some one-to-many relationships I had:

    Introducing FOREIGN KEY constraint 'ProjectAuthorization_UserProfile' on table 'ProjectAuthorizations' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. See previous errors.

    To get around this, you can use the fluent API and put some code in OnModelCreating:

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Entity<UserProfile>()
            .HasMany(u => u.ProjectAuthorizations)
            .WithRequired(a => a.UserProfile)
            .WillCascadeOnDelete(false);
    }

    This will work to remove the cascading delete, but I have to use the fluent API and it has to be done for every one-to-many relationship that causes the problem. I am personally not a fan of cascading deletes in general (for several reasons) and I'm not a huge fan of fluent APIs. However, there is a way to do this without using the fluent API. In OnModelCreating, you can remove the convention that creates the cascading deletes altogether:

    protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
    {
        modelBuilder.Conventions.Remove<OneToManyCascadeDeleteConvention>();
    }

    Thanks to Jeff Derstadt from Microsoft for the info on removing the convention altogether. There is a way to build a custom attribute to remove it on a case-by-case basis, and I'll have a post on how to do this in the near future.

  • links for 2010-06-01

    - by Bob Rhubart
    Venkatakrishnan J: Oracle BI EE 10.1.3.4.1 -- Do we need measures in a Fact Table? Troubleshooting from Rittman Mead's Venkatakrishnan J. (tags: oracle otn businessintelligence datawarehouse)

    Grid container support : JavaFX Composer An overview of how JavaFX Composer supports the grid container. (tags: oracle sun javafx)

    John Brunswick: Site Studio Mobile Example - WCM Reuse The example highlighted in John Brunswick's post takes advantage of dynamic conversion capabilities in Oracle UCM that allow site content to be created and updated via MS Office documents. (tags: oracle otn enterprise2.0)

    @glassfish: GlassFish 3 in the EC2 Cloud powering Dutch and Belgian community polls "The infrastructure is Amazon's Elastic Cloud Computing (EC2) environment because of the dynamic provisioning (elasticity) required by such an online service. Requests are handled directly by the grizzly layer of GlassFish with no extra front-end HTTP layer and shows great performance and scalability." -- The Aquarium (tags: oracle java sun glassfish cloud)

    James Morle: Flash Storage Will Be Cheap: The End of the World is Nigh "We now need technologies that look more like Oracle Exadata v2, with low-latency RDMA interfaces directly into the Operating System/Database. However, they need to easily and natively support other types of storage (unstructured data such as files, VMware datastores and so forth). The Exadata architecture lends itself well to changes in this area in both hardware trends and access protocols." -- James Morle (tags: oracle otn exadata database architecture virtualization)

    Java / Oracle SOA blog: HTTP binding in SOA Suite 11g PS2 (tags: ping.fm)

    Confessions of a Software Developer: Some Tips for Installing Oracle BPM 11g on Windows XP (tags: ping.fm)

    SOA and Java using Oracle technology: Book review: Oracle Coherence 3.5: Create internet scale applications using Oracle's high-performance data grid (tags: ping.fm)

  • E-Business Suite Proactive Support - Workflow Analyzer

    - by Alejandro Sosa
    Overview: The Workflow Analyzer is a standalone, easy-to-run tool created to read, validate and troubleshoot Workflow component configuration as well as runtime data. It identifies areas where potential problems may arise and, based on a set of best practices, suggests to the Workflow System Administrator what to do when such potential problems are found. This tool represents a proactive way to verify Workflow configuration and runtime data, to prevent issues ahead of time, before they have a more considerable impact on a production environment.

    Installation: Since it is standalone, there are no prerequisites, and it runs on Oracle E-Business applications from 11.5.10 onwards. It is installed on the back-end server and can be run directly from SQL*Plus. The output of this tool is written to a friendly formatted HTML file containing the following, covering both Workflow component configuration and Workflow runtime data:

    - Workflow-related database initialization parameters
    - Relevant Oracle E-Business profile option values
    - Workflow-owned concurrent program schedules and Workflow component status
    - Workflow notification mailer configuration and throughput via the related queues and table
    - Workflow-relevant recommended and critical one-off patches as well as the current code level
    - Workflow database footprint, obtained by reading Workflow run-time tables to identify aged processes not being purged; it also checks for large open and closed processes or unhealthy looping conditions in a workflow process, among other checks

    See a sample of the Workflow Analyzer's output here. Besides performing the validations listed above, the Workflow Analyzer provides clarification on the issues it finds and refers the reader to specific Oracle MOS documents to address the findings, or explains the condition so the reader can take proper action.

    How to get it? The Workflow Analyzer can be obtained from Oracle MOS: Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance (Doc ID 1369938.1), and the supplemental note How to run EBS Workflow Analyzer Tool as a Concurrent Request (Doc ID 1425053.1) explains how to register and run this tool as a concurrent program. This way the report from the Workflow Analyzer can be submitted from the application and its output can be seen from the application as well.

  • 3 TB HDD won't reactivate

    - by isif
    After doing a clean install of Windows 8, my Seagate 3 TB HDD won't reactivate in Disk Management. The two volumes are there but I can't use them for some reason. The drive was previously used with a GPT partition table; I can see the two spanned volumes but can't reactivate either. I backed up all my files from Windows 7 onto that drive and desperately need them back. What can and should I do to get the drive back up and running? When I go to Disk → Properties → Volumes, it claims the drive has an MBR partition style, so converting to GPT somehow without data loss should work. gdisk claims to be able to do that, but when I point it at the drive, it claims that it has a GPT partition and a protected MBR partition. Any suggestions on what to do?

  • Weird routing problems with VPN

    - by Borek
    In our VPN setup I have to add a route to my routing table like this:

    route add 1.2.3.0 mask 255.255.255.0 172.16.1.1 -p

    Our internal addresses 1.2.3.x then use 172.16.1.1 as their gateway, and both my local internet and the work VPN can work at the same time. However, when I disconnect from the VPN and reconnect again, I can't ping our servers even though the connection status is "Connected". When I do route print, my previously added route is listed but it doesn't seem to work. So I try to execute that route add command again and, as expected, it tells me that The route addition failed: The object already exists. But, and that's the point, when I now try to ping our servers again, everything works! So every time, I have to execute this route add command, which will fail but fix the issue at the same time. Any ideas what I might be doing wrong? My PC is Windows 7 x64, I am Administrator, UAC is enabled and the command prompt is run with elevated privileges.

  • Generating Deep Arrays: Shallow to Deep, Deep to Shallow or Bad idea?

    - by MobyD
    I'm working on an array structure that will be used as the data source for a report template in a web app. The data comes from relatively complex SQL queries that return one or many rows as one-dimensional associative arrays. In the case of many, they are turned into a two-dimensional indexed array. The data is complex and in some cases there is a lot of it. To save trips to the database (which are extremely expensive in this scenario), I'm attempting to get all of the basic arrays (one- and two-dimensional raw database data) and put them, conditionally, into a single array five levels deep. Organizing the data in PHP seems like a better idea than using WHERE statements in the SQL.

    Array structure:

    Array of years(
        year => array of types(
            types => array of information(
                total => value,
                table => array of data(
                    index => db array
                )
            )
        )
    )

    My first question is: is this a bad idea? Are arrays like this appropriate for this situation? If this would work, how should I go about populating it? My initial thought was shallow to deep, but the more I work on this, the more I realize that it'd be very difficult to abstract out the conditionals that determine where each item goes in the array. So it seems that starting from the most deeply nested data may be the approach I should take. If this is array abuse, what alternatives exist?
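
    As a rough PHP sketch of one way such a structure could be populated from flat query rows (the column names year, type and value are placeholders, not from the original post), building it shallow-to-deep in a single pass looks roughly like this:

        <?php
        // $rows: the flat result rows already fetched from the database.
        $report = array();
        foreach ($rows as $row) {
            $year = $row['year'];
            $type = $row['type'];

            if (!isset($report[$year][$type])) {
                // First time we see this year/type pair: create the inner level.
                $report[$year][$type] = array('total' => 0, 'table' => array());
            }

            $report[$year][$type]['total'] += $row['value'];
            $report[$year][$type]['table'][] = $row;  // keep the raw db array for the template
        }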
