Search Results

Search found 18401 results on 737 pages for 'oracle customer hub'.


  • BSOD Before Windows Will Load - Graphics Related

    - by Brian
    Alright, deep breath here: (Windows 7 Home Premium 64-bit, btw) Today I installed the StarCraft 2 beta. After trying to log in, it had some issues where it said my device stopped working (referring to my video device, I have to imagine). After I force quit the game there were some random "hot" pixels (various colors, if I remember correctly) on the screen. I decided to reboot and try again, with similar results. I figured that maybe my display drivers could stand to be updated (I don't frequently update them as I don't often run into problems). I went out to nVidia's website and grabbed the latest drivers for Windows 7 64 bit, GeForce 9 series. (I have SLi-ed 9800 GTs.) Everything seemed to install fine and I performed the restart. This is when things went from bad (can't play the SC2 beta ;) ) to worse (can't boot into Windows!). Initially the very first splash screen - I think it's the BIOS splash screen - had lines of colored pixels covering it. It then displayed a screen that had lots of "(" characters on it. After that it showed the normal Windows 7 splash screen as if it were going to load into Windows. Before getting much further, it BSODed on me. It was a 0x0000003B stop error, at nvlddmkm.sys. A little digging verified that this was a problem with an nVidia graphics device - not a real shocker. Windows decided it would try to help me diagnose the problem, but its only answer was a System Restore, which did nothing to alleviate the problem. I was able to boot up fine in safe mode; I was not able to roll back the driver, but I did uninstall the driver and reboot. I still had the graphical anomalies during the first two screens (same colored "."s and weird "("s), but there was NOT a stop error. Windows loaded up, found a default driver for the device, installed it, and I restarted to let it load - and had yet another BSOD stop error. Repeat driver uninstall; this time I reloaded the same version of the nVidia driver from their website (I think it's possible that I had been running a 32 bit version, or a Vista versus Windows 7 version, but I don't have that information handy). Restart, same anomalies, same stop error. I am at a loss - at this point all I can think is that the firmware for the video cards got fried or there's actual damage to the cards, which I sincerely hope is not the case, but the sooner I know the better. Any insight into what I might be able to do to troubleshoot/fix this problem would be most helpful. Attached below is a dump from DxDiag. Please let me know if there is more info that I could provide. ------------------ System Information ------------------ Time of this report: 3/18/2010, 23:22:48 Machine name: BRIAN-PC Operating System: Windows 7 Home Premium 64-bit (6.1, Build 7600) (7600.win7_rtm.090713-1255) Language: English (Regional Setting: English) System Manufacturer: Dell Inc System Model: XPS 630i BIOS: Phoenix - AwardBIOS v6.00PG Processor: Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz (4 CPUs), ~2.3GHz Memory: 8192MB RAM Available OS Memory: 8190MB RAM Page File: 1855MB used, 14521MB available Windows Dir: C:\Windows DirectX Version: DirectX 11 DX Setup Parameters: Not found User DPI Setting: Using System DPI System DPI Setting: 96 DPI (100 percent) DWM DPI Scaling: Disabled DxDiag Version: 6.01.7600.16385 32bit Unicode DxDiag Previously: Crashed in DirectShow (stage 1). 
Re-running DxDiag with "dontskip" command line parameter or choosing not to bypass information gathering when prompted might result in DxDiag successfully obtaining this information ------------ DxDiag Notes ------------ Display Tab 1: No problems found. Sound Tab 1: No problems found. Sound Tab 2: No problems found. Sound Tab 3: No problems found. Input Tab: No problems found. -------------------- DirectX Debug Levels -------------------- Direct3D: 0/4 (retail) DirectDraw: 0/4 (retail) DirectInput: 0/5 (retail) DirectMusic: 0/5 (retail) DirectPlay: 0/9 (retail) DirectSound: 0/5 (retail) DirectShow: 0/6 (retail) --------------- Display Devices --------------- Card name: Manufacturer: Chip type: DAC type: Device Key: Enum\ Display Memory: n/a Dedicated Memory: n/a Shared Memory: n/a Current Mode: 1600 x 1200 (32 bit) (1Hz) Driver Name: Driver File Version: () Driver Version: DDI Version: unknown Driver Model: unknown Driver Attributes: Final Retail Driver Date/Size: , 0 bytes WHQL Logo'd: n/a WHQL Date Stamp: n/a Device Identifier: {D7B70EE0-4340-11CF-B123-B03DAEC2CB35} Vendor ID: 0x0000 Device ID: 0x0000 SubSys ID: 0x00000000 Revision ID: 0x0000 Driver Strong Name: Unknown Rank Of Driver: Unknown Video Accel: Deinterlace Caps: n/a D3D9 Overlay: n/a DXVA-HD: n/a DDraw Status: Not Available D3D Status: Not Available AGP Status: Not Available ------------- Sound Devices ------------- Description: Speakers (Realtek High Definition Audio) Default Sound Playback: Yes Default Voice Playback: Yes Hardware ID: HDAUDIO\FUNC_01&VEN_10EC&DEV_0888&SUBSYS_10280249&REV_1001 Manufacturer ID: 1 Product ID: 100 Type: WDM Driver Name: RTKVHD64.sys Driver Version: 6.00.0001.5667 (English) Driver Attributes: Final Retail WHQL Logo'd: n/a Date and Size: 8/18/2008 04:05:28, 1485592 bytes Other Files: Driver Provider: Realtek Semiconductor Corp. HW Accel Level: Basic Cap Flags: 0x0 Min/Max Sample Rate: 0, 0 Static/Strm HW Mix Bufs: 0, 0 Static/Strm HW 3D Bufs: 0, 0 HW Memory: 0 Voice Management: No EAX(tm) 2.0 Listen/Src: No, No I3DL2(tm) Listen/Src: No, No Sensaura(tm) ZoomFX(tm): No Description: Realtek Digital Output (Realtek High Definition Audio) Default Sound Playback: No Default Voice Playback: No Hardware ID: HDAUDIO\FUNC_01&VEN_10EC&DEV_0888&SUBSYS_10280249&REV_1001 Manufacturer ID: 1 Product ID: 100 Type: WDM Driver Name: RTKVHD64.sys Driver Version: 6.00.0001.5667 (English) Driver Attributes: Final Retail WHQL Logo'd: n/a Date and Size: 8/18/2008 04:05:28, 1485592 bytes Other Files: Driver Provider: Realtek Semiconductor Corp. HW Accel Level: Basic Cap Flags: 0x0 Min/Max Sample Rate: 0, 0 Static/Strm HW Mix Bufs: 0, 0 Static/Strm HW 3D Bufs: 0, 0 HW Memory: 0 Voice Management: No EAX(tm) 2.0 Listen/Src: No, No I3DL2(tm) Listen/Src: No, No Sensaura(tm) ZoomFX(tm): No Description: Realtek HDMI Output (Realtek High Definition Audio) Default Sound Playback: No Default Voice Playback: No Hardware ID: HDAUDIO\FUNC_01&VEN_10EC&DEV_0888&SUBSYS_10280249&REV_1001 Manufacturer ID: 1 Product ID: 100 Type: WDM Driver Name: RTKVHD64.sys Driver Version: 6.00.0001.5667 (English) Driver Attributes: Final Retail WHQL Logo'd: n/a Date and Size: 8/18/2008 04:05:28, 1485592 bytes Other Files: Driver Provider: Realtek Semiconductor Corp. 
HW Accel Level: Basic Cap Flags: 0x0 Min/Max Sample Rate: 0, 0 Static/Strm HW Mix Bufs: 0, 0 Static/Strm HW 3D Bufs: 0, 0 HW Memory: 0 Voice Management: No EAX(tm) 2.0 Listen/Src: No, No I3DL2(tm) Listen/Src: No, No Sensaura(tm) ZoomFX(tm): No --------------------- Sound Capture Devices --------------------- Description: Microphone (Realtek High Definition Audio) Default Sound Capture: Yes Default Voice Capture: Yes Driver Name: RTKVHD64.sys Driver Version: 6.00.0001.5667 (English) Driver Attributes: Final Retail Date and Size: 8/18/2008 04:05:28, 1485592 bytes Cap Flags: 0x0 Format Flags: 0x0 Description: Realtek Digital Input (Realtek High Definition Audio) Default Sound Capture: No Default Voice Capture: No Driver Name: RTKVHD64.sys Driver Version: 6.00.0001.5667 (English) Driver Attributes: Final Retail Date and Size: 8/18/2008 04:05:28, 1485592 bytes Cap Flags: 0x0 Format Flags: 0x0 ------------------- DirectInput Devices ------------------- Device Name: Mouse Attached: 1 Controller ID: n/a Vendor/Product ID: n/a FF Driver: n/a Device Name: Keyboard Attached: 1 Controller ID: n/a Vendor/Product ID: n/a FF Driver: n/a Device Name: ESA FW Update Attached: 1 Controller ID: 0x0 Vendor/Product ID: 0x0955, 0x000A FF Driver: n/a Poll w/ Interrupt: No ----------- USB Devices ----------- + USB Root Hub | Vendor/Product ID: 0x10DE, 0x026D | Matching Device ID: usb\root_hub | Service: usbhub | +-+ USB Input Device | | Vendor/Product ID: 0x0955, 0x000A | | Location: Port_#0002.Hub_#0001 | | Matching Device ID: generic_hid_device | | Service: HidUsb | | | +-+ HID-compliant device | | | Vendor/Product ID: 0x0955, 0x000A | | | Matching Device ID: hid_device | | +-+ USB Input Device | | Vendor/Product ID: 0x046D, 0xC01E | | Location: Port_#0003.Hub_#0001 | | Matching Device ID: generic_hid_device | | Service: HidUsb | | | +-+ HID-compliant mouse | | | Vendor/Product ID: 0x046D, 0xC01E | | | Matching Device ID: hid_device_system_mouse | | | Service: mouhid ---------------- Gameport Devices ---------------- ------------ PS/2 Devices ------------ + Standard PS/2 Keyboard | Matching Device ID: *pnp0303 | Service: i8042prt | + Terminal Server Keyboard Driver | Matching Device ID: root\rdp_kbd | Upper Filters: kbdclass | Service: TermDD | + Terminal Server Mouse Driver | Matching Device ID: root\rdp_mou | Upper Filters: mouclass | Service: TermDD ------------------------ Disk & DVD/CD-ROM Drives ------------------------ Drive: C: Free Space: 324.3 GB Total Space: 608.4 GB File System: NTFS Model: WDC WD64 00AAKS-75A7B SCSI Disk Device Drive: D: Free Space: 1.0 GB Total Space: 2.0 GB File System: NTFS Model: WDC WD64 00AAKS-75A7B SCSI Disk Device Drive: E: Model: TSSTcorp DVD+-RW TS-H653F SCSI CdRom Device Driver: c:\windows\system32\drivers\cdrom.sys, 6.01.7600.16385 (English), , 0 bytes -------------- System Devices -------------- Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_10DE&DEV_03B7&SUBSYS_000010DE&REV_A1\3&2411E6FE&1&18 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03AF&SUBSYS_02491028&REV_A1\3&2411E6FE&1&0A Driver: n/a Name: PCI standard host CPU bridge Device ID: PCI\VEN_10DE&DEV_03A3&SUBSYS_02491028&REV_A2\3&2411E6FE&1&00 Driver: n/a Name: NVIDIA nForce Serial ATA Controller Device ID: PCI\VEN_10DE&DEV_0267&SUBSYS_02491028&REV_A1\3&2411E6FE&1&78 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03B6&SUBSYS_02491028&REV_A1\3&2411E6FE&1&10 Driver: n/a Name: PCI standard RAM Controller Device ID: 
PCI\VEN_10DE&DEV_03AE&SUBSYS_02491028&REV_A1\3&2411E6FE&1&09 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_0272&SUBSYS_02491028&REV_A3\3&2411E6FE&1&52 Driver: n/a Name: NVIDIA nForce Serial ATA Controller Device ID: PCI\VEN_10DE&DEV_0266&SUBSYS_02491028&REV_A1\3&2411E6FE&1&70 Driver: n/a Name: LSI 1394 OHCI Compliant Host Controller Device ID: PCI\VEN_11C1&DEV_5811&SUBSYS_02491028&REV_70\4&14591D7E&0&2880 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03B5&SUBSYS_02491028&REV_A1\3&2411E6FE&1&06 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03AD&SUBSYS_02491028&REV_A1\3&2411E6FE&1&08 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_0270&SUBSYS_02491028&REV_A2\3&2411E6FE&1&48 Driver: n/a Name: Standard Dual Channel PCI IDE Controller Device ID: PCI\VEN_10DE&DEV_0265&SUBSYS_02491028&REV_A1\3&2411E6FE&1&68 Driver: n/a Name: NVIDIA GeForce 9800 GT Device ID: PCI\VEN_10DE&DEV_0605&SUBSYS_062D10DE&REV_A2\4&4BABE2A&0&0028 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03B4&SUBSYS_02491028&REV_A1\3&2411E6FE&1&07 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03AC&SUBSYS_02491028&REV_A1\3&2411E6FE&1&01 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_10DE&DEV_026F&SUBSYS_CB8410DE&REV_A2\3&2411E6FE&1&80 Driver: n/a Name: NVIDIA nForce PCI System Management Device ID: PCI\VEN_10DE&DEV_0264&SUBSYS_02491028&REV_A3\3&2411E6FE&1&51 Driver: n/a Name: NVIDIA GeForce 9800 GT Device ID: PCI\VEN_10DE&DEV_0605&SUBSYS_062D10DE&REV_A2\4&10BD3C89&0&0018 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03B3&SUBSYS_02491028&REV_A1\3&2411E6FE&1&0E Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03AB&SUBSYS_02491028&REV_A1\3&2411E6FE&1&04 Driver: n/a Name: Standard Enhanced PCI to USB Host Controller Device ID: PCI\VEN_10DE&DEV_026E&SUBSYS_02491028&REV_A3\3&2411E6FE&1&59 Driver: n/a Name: PCI standard ISA bridge Device ID: PCI\VEN_10DE&DEV_0260&SUBSYS_02491028&REV_A3\3&2411E6FE&1&50 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03BC&SUBSYS_02491028&REV_A1\3&2411E6FE&1&11 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03B2&SUBSYS_02491028&REV_A1\3&2411E6FE&1&0D Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03AA&SUBSYS_02491028&REV_A1\3&2411E6FE&1&02 Driver: n/a Name: Standard OpenHCD USB Host Controller Device ID: PCI\VEN_10DE&DEV_026D&SUBSYS_02491028&REV_A3\3&2411E6FE&1&58 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03BA&SUBSYS_02491028&REV_A1\3&2411E6FE&1&12 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03B1&SUBSYS_02491028&REV_A1\3&2411E6FE&1&0C Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03A9&SUBSYS_02491028&REV_A1\3&2411E6FE&1&03 Driver: n/a Name: High Definition Audio Controller Device ID: PCI\VEN_10DE&DEV_026C&SUBSYS_02491028&REV_A2\3&2411E6FE&1&81 Driver: n/a Name: PCI standard PCI-to-PCI bridge Device ID: PCI\VEN_10DE&DEV_03B8&SUBSYS_000010DE&REV_A1\3&2411E6FE&1&28 Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03B0&SUBSYS_02491028&REV_A1\3&2411E6FE&1&0B Driver: n/a Name: PCI standard RAM Controller Device ID: PCI\VEN_10DE&DEV_03A8&SUBSYS_02491028&REV_A2\3&2411E6FE&1&05 Driver: n/a Name: NVIDIA nForce Networking Controller Device ID: 
PCI\VEN_10DE&DEV_0269&SUBSYS_02491028&REV_A3\3&2411E6FE&1&A0 Driver: n/a --------------- EVR Power Information --------------- Current Setting: {5C67A112-A4C9-483F-B4A7-1D473BECAFDC} (Quality) Quality Flags: 2576 Enabled: Force throttling Allow half deinterlace Allow scaling Decode Power Usage: 100 Balanced Flags: 1424 Enabled: Force throttling Allow batching Force half deinterlace Force scaling Decode Power Usage: 50 PowerFlags: 1424 Enabled: Force throttling Allow batching Force half deinterlace Force scaling Decode Power Usage: 0

  • How to set up a Hadoop cluster so that it accepts MapReduce jobs from remote computers?

    - by drasto
    There is a computer I use for Hadoop map/reduce testing. This computer runs 4 Linux virtual machines (using Oracle VirtualBox). Each of them has Cloudera with Hadoop (distribution c3u4) installed and serves as a node of the Hadoop cluster. One of those 4 nodes is the master node running the namenode and jobtracker; the others are slave nodes. Normally I use this cluster from the local network for testing. However, when I try to access it from another network I cannot send any jobs to it. The computer running the Hadoop cluster has a public IP and can be reached over the internet for other services. For example, I am able to reach the HDFS (namenode) administration site and the map/reduce (jobtracker) administration site (on ports 50070 and 50030 respectively) from the remote network. It is also possible to use Hue. Ports 8020 and 8021 are both allowed. What is blocking my map/reduce job submissions from reaching the cluster? Is there some setting that I must change first in order to be able to submit map/reduce jobs remotely? Here is my mapred-site.xml file: <configuration> <property> <name>mapred.job.tracker</name> <value>master:8021</value> </property> <!-- Enable Hue plugins --> <property> <name>mapred.jobtracker.plugins</name> <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value> <description>Comma-separated list of jobtracker plug-ins to be activated. </description> </property> <property> <name>jobtracker.thrift.address</name> <value>0.0.0.0:9290</value> </property> </configuration> And this is in the /etc/hosts file: 192.168.1.15 master 192.168.1.14 slave1 192.168.1.13 slave2 192.168.1.9 slave3
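    One quick check worth doing before touching the Hadoop configuration itself: the web UIs on 50070/50030 are not the endpoints the job client talks to, so being able to reach them proves nothing about the RPC ports. Below is a minimal sketch (assuming Python is available on the remote machine; the hostname is a placeholder for the cluster's public address) that confirms whether 8020/8021 are actually reachable from outside:

        import socket

        # Placeholder -- substitute the cluster's public hostname or IP.
        MASTER_HOST = "cluster.example.com"

        # 8020 = namenode RPC (fs.default.name), 8021 = jobtracker RPC
        # (mapred.job.tracker); 50070/50030 are only the web UIs.
        for port in (8020, 8021, 50070, 50030):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(5)
            try:
                s.connect((MASTER_HOST, port))
                print("port %d: reachable" % port)
            except socket.error as exc:
                print("port %d: blocked or unreachable (%s)" % (port, exc))
            finally:
                s.close()

    If the RPC ports do answer, the next suspect is name resolution: the remote client has to address the jobtracker/namenode by a name it can resolve to the public IP (not the 192.168.1.x entries above), and the jobtracker may in turn hand out slave hostnames that only resolve inside the LAN.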

  • Troubleshooting PHP email sending?

    - by darkAsPitch
    I created a website that occasionally emails users when they register/change their password/etc. Every other person however cannot or does not receive the emails. They are telling me that they are not even hitting their spam folders. I don't know a ton about MX records or email sending, but when I "Edit DNS Zone" for this domain in particular there is 1 MX record listed there. How do you go about troubleshooting botched PHP mail actions? UPDATE: Here is my super-simple php mailing code: $subject = "Subject Here"; $message = "Emails Message"; $to = $verified_user_data["email_address"]; $headers = "From: [email protected]\r\n" . "Reply-To: [email protected]\r\n" . "X-Mailer: PHP/" . phpversion(); //returns true on success, false on failure $email_result = mail($to, $subject, $message, $headers); re: "are you saying that some do and some do not?" @ Jacob Yes, basically. I send the emails containing the user's login username/password using similar code above. And I sell to fairly tech-savvy people. About 50% of the time, my customers claim they cannot find their welcome emails in their inbox OR in their spam box. It's as if it never arrived. I have the largest problem with Yahoo email addresses accepting my emails or so it seems. re: "The MX record at your end doesn't factor in, although the SPF record (or lack of it) will. How much access and control do you have on the server itself?" @ John Gardeniers I rent a dedicated server from Codero. Running CentOS 5, WHM + cPanel. I have full root access to the entire thing. Don't know much about MX records and/or SPF records. I just want the PHP mail function to work. It doesn't say much about that on the php mail function's help page. re: "What are you using for the SMTP server?" @ JonLim No idea. I use the code above when I need to fire off an email to a loyal customer, and that's it. Do I need to be worrying about SMTP servers? re: "Could be many, many things. Can you describe how you're sending mail in your code? i.e. are you relaying off of another mail server somewhere, using the local sendmail or postfix? Any consistency in domains that can/cannot receive email? Do you have a PTR record setup from the IP address that you're sending mail out as? What about SPF records?" @ gravyface I just described my simple code above! I believe I have been having the most trouble with Yahoo domains, however "independent" domains (probably running spamassasin) ex. [email protected] as opposed to [email protected] seem to give a lot of trouble as well. I do not know if I have a PTR record setup from the IP address I'm sending my mail from. It's probably the same IP address that I setup my domain on, because I didn't do anything extra special. No idea about SPF records either, where can I go to create one? Side Note: It's a crying shame what havoc the spammers have brought upon our beloved email system.
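    Since deliverability problems like this usually come down to DNS rather than the PHP code, a quick first step is to check what receiving servers see when they look up the sending domain and IP. A rough sketch, assuming a Unix machine with Python and the dig utility; the domain and IP below are placeholders for the real values:

        import socket
        import subprocess

        DOMAIN = "yourdomain.com"      # placeholder: the domain you send from
        SERVER_IP = "203.0.113.10"     # placeholder: the server's public IP

        # PTR (reverse DNS): big receivers such as Yahoo score mail down hard
        # when the sending IP has no reverse record.
        try:
            ptr_name, _, _ = socket.gethostbyaddr(SERVER_IP)
            print("PTR for %s -> %s" % (SERVER_IP, ptr_name))
        except socket.herror:
            print("no PTR record for %s" % SERVER_IP)

        # SPF: published as a TXT record on the sending domain.
        txt = subprocess.check_output(["dig", "+short", "TXT", DOMAIN])
        print("TXT records for %s:" % DOMAIN)
        print(txt.decode() or "  (none -- consider publishing a v=spf1 record)")

    Separately, PHP's mail() accepts a fifth argument (for example "-fbounces@yourdomain.com") that sets the envelope sender; making that a real mailbox on your own domain, and keeping the From: header on the same domain, tends to help with the Yahoo-style filtering described above.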

  • Django + gunicorn + virtualenv + Supervisord issue

    - by Florian Le Goff
    Dear all, I have a strange issue with my virtualenv + gunicorn setup, only when gunicorn is launched via supervisord. I do realize that it may very well be an issue with my supervisord and I would appreciate any feedback on a better place to ask for help... In a nutshell : when I run gunicorn from my user shell, inside my virtualenv, everything is working flawlessly. I'm able to access all the views of my Django project. When gunicorn is launched by supervisord at the system startup, everything is OK. But, if I have to kill the gunicorn_django processes, or if I perform a supervisord restart, once that gunicorn_django has relaunched, every request is answered with a weird Traceback : (...) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/__init__.py", line 77, in connection = connections[DEFAULT_DB_ALIAS] File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 92, in __getitem__ backend = load_backend(db['ENGINE']) File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 50, in load_backend raise ImproperlyConfigured(error_msg) TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2' isn't an available database backend. Try using django.db.backends.XXX, where XXX is one of: 'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3' Error was: cannot import name utils Full stack available here : http://pastebin.com/BJ5tNQ2N I'm running... Ubuntu/maverick (up-to-date) Python = 2.6.6 virtualenv = 1.5.1 gunicorn = 0.12.0 Django = 1.2.5 psycopg2 = '2.4-beta2 (dt dec pq3 ext)' gunicorn configuration : backlog = 2048 bind = "127.0.0.1:8000" pidfile = "/tmp/gunicorn-hc.pid" daemon = True debug = True workers = 3 logfile = "/home/hc/prod/log/gunicorn.log" loglevel = "info" supervisord configuration : [program:gunicorn] directory=/home/hc/prod/hc command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py user=hc umask=022 autostart=True autorestart=True redirect_stderr=True Any advice ? I've been stuck on this one for quite a while. It seems like some weird memory limit, as I'm not enforcing anything special : $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 20 file size (blocks, -f) unlimited pending signals (-i) 16382 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 8192 cpu time (seconds, -t) unlimited max user processes (-u) unlimited virtual memory (kbytes, -v) unlimited file locks (-x) unlimited Thank you.
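    One way to narrow down whether this is really an environment difference (rather than a Django/psycopg2 bug) is to log which interpreter and import path the process actually ends up with in both situations. A minimal diagnostic sketch, assuming it is dropped at the top of settings.py or gunicorn.conf.py (the log path is a placeholder); compare the file written when gunicorn is started from the shell with the one written after a supervisord restart:

        # Hypothetical diagnostic -- remove again once the two runs have been compared.
        import logging
        import os
        import sys

        logging.basicConfig(filename="/tmp/env-debug.log", level=logging.INFO)
        logging.info("executable: %s", sys.executable)        # should be the venv's python
        logging.info("sys.path: %s", sys.path)                # venv site-packages first?
        logging.info("PATH: %s", os.environ.get("PATH", ""))
        logging.info("cwd: %s", os.getcwd())
        try:
            import psycopg2
            logging.info("psycopg2 %s loaded from %s", psycopg2.__version__, psycopg2.__file__)
        except ImportError as exc:
            logging.info("psycopg2 import failed: %s", exc)

    If the two runs differ (wrong interpreter, missing venv site-packages, different PATH), the usual fix is to make the supervisord entry fully explicit about the virtualenv, e.g. by putting the venv's bin directory on PATH via an environment= line in the [program:gunicorn] section.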

  • Recommendations for distributed processing/distributed storage systems

    - by Eddie
    At my organization we have a processing and storage system spread across two dozen Linux machines that handles over a petabyte of data. The system right now is very ad-hoc; processing automation and data management are handled by a collection of large Perl programs on independent machines. I am looking at distributed processing and storage systems to make it easier to maintain, to evenly distribute load and data with replication, and to grow in disk space and compute power. The system needs to be able to handle millions of files, varying in size between 50 megabytes and 50 gigabytes. Once created, the files will not be appended to, only replaced completely if need be. The files need to be accessible via HTTP for customer download. Right now, processing is automated by Perl scripts (that I have complete control over) which call a series of other programs (that I don't have control over because they are closed source) that essentially transform one data set into another. No data mining happening here. Here is a quick list of things I am looking for: Reliability: These data must be accessible over HTTP about 99% of the time, so I need something that does data replication across the cluster. Scalability: I want to be able to add more processing power and storage easily and rebalance the data across the cluster. Distributed processing: Easy and automatic job scheduling and load balancing that fits with the processing workflow I briefly described above. Data location awareness: Not strictly required but desirable. Since data and processing will be on the same set of nodes I would like the job scheduler to schedule jobs on or close to the node that the data is actually on, to cut down on network traffic. Here is what I've looked at so far: Storage management: GlusterFS: Looks really nice and easy to use but doesn't seem to have a way to figure out what node(s) a file actually resides on to supply as a hint to the job scheduler. GPFS: Seems like the gold standard of clustered filesystems. Meets most of my requirements except, like GlusterFS, data location awareness. Ceph: Seems way too immature right now. Distributed processing: Sun Grid Engine: I have a lot of experience with this and it's relatively easy to use (once it is configured properly, that is). But Oracle got its icy grip around it and it no longer seems very desirable. Both: Hadoop/HDFS: At first glance it looked like Hadoop was perfect for my situation. Distributed storage and job scheduling, and it was the only thing I found that would give me the data location awareness that I wanted. But I don't like the namenode being a single point of failure. Also, I'm not really sure if the MapReduce paradigm fits the type of processing workflow that I have. It seems like you need to write all your software specifically for MapReduce instead of just using Hadoop as a generic job scheduler. OpenStack: I've done some reading on this but I'm having trouble deciding if it fits well with my problem or not. Does anyone have opinions or recommendations for technologies that would fit my problem well? Any suggestions or advice would be greatly appreciated. Thanks!

  • Magento hosting on a budget

    - by spa
    I have to do a setup for Magento. My constraints are primarily ease of setup and fault tolerance/failover. Furthermore, costs are an issue. I have three identical physical servers to get the job done. Each server node has an i7 quad core, 16GB RAM, and 2x3TB HD in a software RAID 1 configuration. Each node runs Ubuntu 12.04 right now. I have an additional IP address which can be routed to any of these nodes. The Magento shop has max. 1000 products, 50% of them are bundle products. I would estimate that max. 100 users are active at once. This leads me to the conclusion that performance is not top priority here. My first setup idea: One node (lb) runs nginx as a load balancer. The additional IP is used with the domain name and routed to this node by default. Nginx distributes the load equally to the other two nodes (shop1, shop2). Shop1 and shop2 are configured equally: each server runs Apache2 and MySQL. The MySQLs are configured with master/slave replication. My failover strategy: Lb fails = route the IP to shop1 (MySQL master), continue. Shop1 fails = lb will handle that automatically; promote the MySQL slave on shop2 to master, reconfigure Magento to use shop2 for writes, continue. Shop2 fails = lb will handle that automatically, continue. Is this a sane strategy? Has anyone done a similar setup with Magento? My second setup idea: Another way to do it would be to use DRBD for storing the MySQL data files on shop1 and shop2. I understand that in this scenario only one node/MySQL instance can be active and the other is used as a hot standby. So in case shop1 fails, I would start up MySQL on shop2, route the IP to shop2, and continue. I like that, as the MySQL setup is easier and the nodes can be configured 99% identically. So in this case the load balancer becomes useless and I have a spare server. My third setup idea: The third way might be master-master replication of the MySQL databases. However, in my opinion this might be tricky, as Magento isn't built for this scenario (e.g. conflicting ids for new rows). I would not do that until I have heard of a working example. Could you give me advice on which route to follow? There seems to be no single "good" way to do it. E.g. I read blog posts which describe a MySQL master/slave setup for Magento, but elsewhere I read that data might get duplicated when the slave lags behind the master (e.g. when an order is placed, a customer might get created twice). I'm kind of lost here.

  • Partition table is corrupt

    - by Tim
    I have a corrupt partition table on a laptop that is running Ubuntu 10.4. Before the partition table was corrupted I had the following partitions: 2 primary partitions: 1st - NTFS 2nd - Extended 4 logical partitions that are built within the 2nd, extended one: 1st NTFS (68 GiB) 2nd Linux (19 GiB) 3rd Swap (1.4 GiB) 4th Linux (24 GiB) The physical order of these partitions was the following: ( 4th Linux ) - ( 1st NTFS ) - ( 2nd Linux ) - ( 3rd Swap ) The logical order of the partitions was different: ( 1st NTFS ) - ( 2nd Linux ) - ( 3rd Swap ) ( 4th Linux ) The NTFS partition was big and it resided between the 2 Linux partitions; neither of these partitions had enough space to install Oracle 11g for my project with prof. Gamper and Markus Innerebner. Therefore, I decided to either a) move the NTFS partition to the left or b) remove it completely and extend the partition where Linux resides. As a tool I chose GParted. But unfortunately it was not able to move the partition, because it found that in the NTFS partition there are some blocks that are referenced multiple times. It was not able to remove the partition either, because in that case the partitions that follow it, ( 2nd Linux ) - ( 3rd Swap ), would in its opinion also have to be removed, because the organization of an extended partition is a linked list. Since GParted was not able to do this, I tried to find another tool. I found the diskdrake tool in the PCLinuxOS distribution of Linux. That tool silently deleted the ( 1st NTFS ) partition and I thought that everything was fine. But diskdrake damaged the partition table in such a way that I am able neither to boot from the hard disk nor to see the partitions with GParted, or even with diskdrake itself! Fortunately I have a live CD of Ubuntu 8.10 and I am able to boot and see the hard disk. I have 2 ideas for how I can solve the problem: 1) Manually change the disk partition entries and point them to the correct partitions. 2) Create a partition table with GParted that is as close as possible to the previous one. I find the 2nd approach less time consuming, but some data will be lost because it is not possible to place the borders of the partitions exactly where they were before. Moreover, I am not sure whether such an approach would work - for example, whether the OS would be able to locate files after repartitioning. I feel like it will, but I am not 100% sure. Are there any ideas on how the problem may be solved?

  • Back up a hosted SharePoint

    - by David Mackintosh
    One of my customers has outsourced their Sharepoint and Exchange services to a hosted services provider. I believe it is a Sharepoint 2007 service. It is a shared hosting solution, so we do not have any kind of access to the server itself; we only have user-level and sharepoint-administrator-level access to the Sharepoint application. They have come to the point where they would like to have a copy of everything that is on the Sharepoint server. I have downloaded the Office Sharepoint Designer 2007, and it features three (!) ways to backup a Sharepoint server, none (!) of which work for me: File-Export-Personal Web Package: When selecting everything, it calculates a negative size. Barfs with No "content-type" in CGI environment error. File-Export-Sharepoint Template: barfs with a A World Wide Web browser, such as Windows Internet Explorer, is required to use this feature error. Site-Administration-Backup Web Site: wants to create the backup .cmp file on the sharepoint server itself. I don't have access to any servers on the same network so I can't redirect it to any form of the suggested \\server\place. Barfs with a The Web application at $URL could not be found. [...] error. Possibly moot because Google tells me that bad things happen using OSD to back up sites larger than 24MB (which this site is most definitely). So I called the helpdesk of the outsource provider, and got told that they recommend using OSD, but no they don't actually provide any application support for OSD (not that I blame them for that), but they could do a stsadm.exe backup and provide us with that, and OSD should be able to read the resulting cmp file. Then for authorization reasons they had my customer call them directly (since I can't authorize such an operation), and they told him that he didn't want a stsadm.exe backup, he wanted to get into an 'explorer view' and deal with things that way (they were vague). Google hasn't been much help in figuring out what an 'explorer view' is, let alone how I bring one up. The end goal of this operation is to have a backup of the site as it exists (hopefully today, but shortly anyways) in such a format that we don't need another sharepoint server to restore it to. Ie we'd like to be able to pick individual content directly out of this backup. We are not excessively concerned with things like formatting. We just want the documents. This is a fairly complex site with multiple subsites and multiple folders per subsite, so sitting there and manually downloading each file isn't really going to happen if there is a better easier way. So, my questions: Is the stsadm.exe backup what I want? If not, what do I want? If I manage to convince them that I do want the stsadm.exe backup, can I pick files out of the resulting backup file with OSD? If OSD isn't going to let me extract individual files, is there a tool I can use that can?

  • Why would a PCI scan fail because of components that are not even installed?

    - by Brandon
    Recently a PCI scan was run against a web server and the result was a failure. Some of the issues could be fixed, however others simply make no sense to me. The machine was a clean install, there are only two things running, the .NET 3.5 website and the dotDefender web application firewall. However there are several errors similar to: Web server vulnerability Impact: /servlet/SessionServlet: JRun or Netware WebSphere default servlet found. All default code should be removed from servers. Risk Factor: Medium/ CVSS2 Base Score: 6.4 CVE: CVE-2000-0539 I'm not sure what this is, but I can't find anything on the server that looks anything like this. Web server vulnerability Impact: /some.php?=PHPE9568F35- D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests that contain specific QUERY strings. Risk Factor: Medium/ CVSS2 Base Score: 5.0 PHP is not installed. Trying to add that query string to any page does nothing because the application ignores it. And doing that phpVersion check results in a 404. Similar to this, there are dozens of errors related to JSP and Oracle that are also not installed. Web server vulnerability Impact: /admin/database/wwForum.mdb: Web Wiz Forums pre 7.5 is vulnerable to Cross-Site Scripting attacks. Default login/pass is Administrator/letmein Risk Factor: Medium/ CVSS2 Base Score: 4.0 There are several errors like this, telling me that Web Wiz Forums, Alan Ward A-Cart 2.0, IlohaMail, etc. are all vulnerable. These are not installed or referenced anywhere I can find. There are even references to pages that simply don't exist, like OpenAutoClassifieds. Can anyone point me in the right direction as to why these errors are showing up or where I might look to find these components if they are in fact installed? Note: This website and server are for a subdomain of the main website. The main website runs on a server that is running Apache/PHP, but I don't have access to that server. The report says the subdomain was the site being scanned, but is it possible for it to have scanned the main site as well?

  • Coldfusion 8 Application Crashes Under Heavy Load

    - by KM01
    Hello, We have a CF8 app that runs for 20-25 minutes before crashing under heavy load ~ 1200 users. This load is generated by our load testing tool: 1200 users ramped up in 5 mins (approx behavior of our users), running for an hour. We have this app on Solaris 10, Apache 2, JRun 4 and Oracle 10g. Java version is 1.6. During the initial load tests, the thread dumps pointed to monitor deadlocks that pointed to sessions. "jrpp-173": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1" "scheduler-1": waiting to lock monitor 0x026c3ce0 (object 0x6abe2f20, a coldfusion.monitor.memory.SessionMemoryMonitor$TopMemoryUsedSessions), which is held by "jrpp-167" "jrpp-167": waiting to lock monitor 0x019fdc60 (object 0x6b893530, a java.util.Hashtable), which is held by "scheduler-1" We increased the number of sessions relative to the number of CPUs (48 simultaneous threads against 32 CPUs), and the deadlock went away. While varying the simultaneous threads helped a little bit in terms of response time, the CF server still tanked in 20-25 minutes during all of these tests. We ran more thread dumps, and saw a thread locking a monitor, for e.g.: "jrpp-475" prio=3 tid=0x02230800 nid=0x2c5 runnable [0x4397d000] java.lang.Thread.State: RUNNABLE at java.util.HashMap.getEntry(HashMap.java:347) at java.util.HashMap.containsKey(HashMap.java:335) at java.util.HashSet.contains(HashSet.java:184) at coldfusion.monitor.memory.MemoryTracker.onAddObject(MemoryTracker.java:124) at coldfusion.monitor.memory.MemoryTrackerProxy.onReplaceValue(MemoryTrackerProxy.java:598) at coldfusion.monitor.memory.MemoryTrackerProxy.onPut(MemoryTrackerProxy.java:510) at coldfusion.util.CaseInsensitiveMap.put(CaseInsensitiveMap.java:250) at coldfusion.util.FastHashtable.put(FastHashtable.java:43) - locked <0x6f7e1a78> (a coldfusion.runtime.Struct) at coldfusion.runtime.CfJspPage._arrayset(CfJspPage.java:1027) at coldfusion.runtime.CfJspPage._arraySetAt(CfJspPage.java:2117) at cfvalidation2ecfc1052964961$funcSETUSERAUDITDATA.runFunction(/app/docs/apply/cfcs/validation.cfc:377) As you see in the last line above there were several references CFMs and CFCs, and the lines have "cflock" tags, which were scoped to the "application." We (the dev team) then changed them to be scoped to a "name". After more load tests, there is no locking going on and there no deadlocks, but now the application tanks in 7-10 minutes. We've gotten system, network and DB reports from the respective admins, and they are not being taxed; even watched the server stats with server monitor, top, prstat, ran sar reports, etc. So we believe it is an issue with the CF server or maybe the JVM. I am running out of ideas as to what else we can try. Disclaimer: I am not a CF developer or Admin. I am just running the load test, analyzing the reports, threads etc, and sharing the results with the dev and admin teams, and trying the next change, and so on. So far no dice. Has anyone run into something similar? How did you go about diagnosing and troubleshooting? All thoughts and pointers welcome. Thank you for your time! KM

  • Duplicate GET request from multiple IPs - can anyone explain this?

    - by dwq
    We've seen a pattern in our webserver access logs which we're having problem explaining. A GET request appears in the access log which is a legitimate, but private, url as part of normal e-commerce website use (by private, we mean there is a unique key in a url form variable generated specifically for that customer session). Then a few seconds later we get hit with an identical request maybe 10-15 times within the space of a second. The duplicate requests are all from different IP addresses. The UserAgent for the duplicates are all the same (but different from the original request). The reverse DNS lookup on the IPs for all the duplicates requests resolve to the same large hosting company. Can anyone think of a scenario what would explain this? EDIT 1 Here's an example that's probably anonymised beyond being any actual use, but it might give an idea of the sort of pattern we're seeing (it's from a search query as they sometimes get duplicated too): xx.xx.xx.xx - - [21/Jun/2013:21:42:57 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "http://www.ourdomain.com/index.html" "Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)" xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:03 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" xx.xx.xx.xx - - [21/Jun/2013:21:43:04 +0100] "GET /search.html?search=widget&Submit=Search HTTP/1.0" 200 5475 "" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_7) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.91 Safari/534.30" UPDATE 2 Sometimes it is 
part of a checkout flow that's duplicated too, so I'd think Twitter is unlikely.
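    One way to put numbers on the pattern before deciding whether it is a prefetching proxy, a link-scanning service or something hostile: group the log by URL and minute and count how many distinct IPs and user-agents hit each one. A rough sketch, assuming the standard Apache combined log format shown above; the filename and the burst threshold are placeholders:

        import re
        from collections import defaultdict

        # Matches the "combined" format used in the excerpts above.
        LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|POST) (\S+)[^"]*" \d+ \d+ "[^"]*" "([^"]*)"')

        groups = defaultdict(lambda: {"ips": set(), "agents": set(), "count": 0})

        with open("access.log") as fh:
            for line in fh:
                m = LINE.match(line)
                if not m:
                    continue
                ip, ts, url, agent = m.groups()
                # Bucket by URL and minute so near-simultaneous duplicates group together.
                key = (url, ts[:17])   # "21/Jun/2013:21:43" -- day through the minute
                g = groups[key]
                g["ips"].add(ip)
                g["agents"].add(agent)
                g["count"] += 1

        for (url, minute), g in sorted(groups.items()):
            if g["count"] > 5:        # arbitrary threshold for a "suspicious burst"
                print("%s %s: %d hits from %d IPs, %d user-agents"
                      % (minute, url, g["count"], len(g["ips"]), len(g["agents"])))

    Bursts that always come from the same hosting company's ranges a few seconds after a real customer's request are consistent with some security or prefetch appliance re-fetching the URLs a user visits, which would also explain why private, session-keyed URLs show up.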

  • What is the best server or IP address to use for prolonged testing?

    - by eldorel
    I usually run uptime/latency tests against (and from) two servers that we own at different sites and until recently I've used the google dns servers as a control group. However, I've realized there is a potential problem with monitoring latency over extended periods of time. Almost all of the major service providers are using ANYCAST. For short tests this doesn't matter, but I need to run a set of tests for at least a week to try and catch an intermittent problem, and a change in the anycast priority while trying to test latency will cause the latency values for that server to change accordingly. Since I'm submitting graphs of this data to the ISP, I need to avoid/account for as many variables as possible. Spikes in the data for only one of the tested servers will only cause headaches. So can anyone recommend servers that: are not using anycast are owned by an entity that has a good uptime reputation (so they can't claim that the problem is server-side) will respond to ICMP requests Have an available service that runs on TCP/UDP (http or dns preferably) Wont consider an automated request every 10 minutes to be abuse Are accessible from anywhere in the world Are not local to the isp ( consider this an investigation of a hostile party ) Thanks in advance. Edit: added #6 and #7 above. More info: I am attempting to demonstrate a network problem for an entire node of our local ISP's network. They are actively blaming the issue on the equipment installed at the customer sites (our backup site is one of these), and refuse to escalate the problem. (even though 2 of these businesses have ISP provided modems, and all of us have completely different routers/services running) I am already quite familiar with the need to test an isp controlled IP, but they are actively dropping all packets targeted at gateway ip addresses and are only passing traffic addressed beyond the gateways. So to demonstrate the issue, I am sending packets to other systems in the same node, systems one hop away from the affected node, and systems completely outside the network. Unfortunately, all of the systems I have currently are either administered directly by myself, or by people who are biased enough to assist me. I need to have several systems included in the trace/log/graphs that are 100% not in the control of either myself or the isp so that the graphs have a stable/unbiased control group. These requirements are straight from legal, I'm just trying to make sure that everything that could be argued to invalidate the data is already covered. In Summary: I need to be able to show tcp/udp/icmp as 3 separate data points, and I need to be able to show the connections inside the local node, from local node to another nearby node, from those 2 nodes to the internet, and through the internet to both verifiable servers and a control group that I have no control over whatsoever. Again, Google/opendns/yahoo/msn/facebook/etc all use anycast, which throws the numbers off every time the anycast caches expire, so I need suggestions of an IP or server that is available for this type of testing. I was hoping someone knew of a system run by someone such as ISC or ICANN, or perhaps even a .gov server (fcc or nsa maybe?) setup for this type of testing. Thanks again.
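    On the measurement side (independent of which control hosts are chosen), a small probe that records both a TCP connect time and an ICMP round trip for each target on the same 10-minute schedule keeps the data points comparable across the week. A rough sketch, assuming a Linux box with Python; the target list is a placeholder to be filled with the in-node, next-node and external hosts described above:

        import csv
        import socket
        import subprocess
        import time

        # Placeholder targets: (host, TCP port of a service it exposes).
        TARGETS = [("203.0.113.5", 80), ("198.51.100.9", 53)]

        def tcp_connect_ms(host, port, timeout=5):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.settimeout(timeout)
            start = time.time()
            try:
                s.connect((host, port))
                return (time.time() - start) * 1000.0
            except socket.error:
                return None
            finally:
                s.close()

        def icmp_rtt_ms(host):
            # Shells out to the system ping so the script need not run as root.
            try:
                out = subprocess.check_output(["ping", "-c", "1", "-W", "2", host]).decode()
            except subprocess.CalledProcessError:
                return None
            for part in out.split():
                if part.startswith("time="):
                    return float(part[5:])
            return None

        with open("latency.csv", "a") as fh:
            writer = csv.writer(fh)
            while True:
                now = time.strftime("%Y-%m-%d %H:%M:%S")
                for host, port in TARGETS:
                    writer.writerow([now, host, "tcp", tcp_connect_ms(host, port)])
                    writer.writerow([now, host, "icmp", icmp_rtt_ms(host)])
                fh.flush()
                time.sleep(600)   # one sample every 10 minutes, as described above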

  • How to tune system settings for mongoDB on Linux?

    - by jsh
    Trying to squeeze a lot out of one question here -- please bear with me. Although the MongoDB man pages make several useful recommendations about system settings like ulimit (http://docs.mongodb.org/manual/reference/ulimit/), and other production factors (http://docs.mongodb.org/manual/administration/production-notes/) they seem mysteriously silent on things like virtual memory and swap settings. The closest we get to a hint is that "...the operating system’s virtual memory subsystem manages MongoDB’s memory..." (http://docs.mongodb.org/manual/faq/fundamentals/#does-mongodb-require-a-lot-of-ram). Running the same job - high writes and high reads on about 10,000,000 records in a single collection -- on my 4-processor, 4GB RAM macbook and an 8-core ubuntu box with 64GB RAM I saw dramatically WORSE read performance on the linux box with factory settings, and could hear the disk constantly spinning, indicating high I/O and presumably swapping. Yes, other things were happening on the box, but there was plenty of free RAM, disk space, etc.; furthermore, I did not see evidence that Mongo was expanding to take advantage of all that free RAM as it is touted to do. Linux box default settings were as follows: vm.swappiness =60 vm.dirty_background_ratio = 10 vm.dirty_ratio = 20 vm.dirty_expire_centisecs =3000 vm.dirty_writeback_centisecs=500 I hazarded some guesses looking at docs and blogs for other types of databases (Oracle, MYSQL, etc.), experimented, and adjusted as below. vm.swappiness=10 vm.dirty_background_ratio=5 vm.dirty_ratio=5 vm.dirty_writeback_centisecs=250 vm.dirty_expire_centisecs=500 I saw some immediate apparent improvements in read time. However, when I ran my test jobs again, read performance continued to be painfully sluggish during heavy writes. Then, I REBUILT the collection from an available data source - and suddenly I can read at 1ms or less per record WHILE doing the write job! So the question is really two-fold: 1) What are appropriate VM settings for MongoDB on Linux? 2) (bonus) Does Mongo do some checking or optimization with the OS while data is being built? In other words, if I have built a large data set with suboptimal VM or I/O settings, does Mongo make assumptions during the memory-mapping process that will fail to take advantage of optimizations down the road? Obviously I don't fully grok memory mapping under the hood (I was hoping I wouldn't have to). Any help appreciated...thanks! -j
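    When experimenting with these tunables it helps to record, alongside every test run, both the kernel settings in force and what mongod itself reports about memory; otherwise before/after comparisons like the one above get hard to trust. A small sketch, assuming Linux and, for the optional part, pymongo and a mongod on the default local port:

        import os

        # Kernel VM tunables discussed above -- read the live values so each
        # test run is recorded together with the settings it ran under.
        VM_KEYS = ["swappiness", "dirty_background_ratio", "dirty_ratio",
                   "dirty_expire_centisecs", "dirty_writeback_centisecs"]

        settings = {}
        for key in VM_KEYS:
            with open(os.path.join("/proc/sys/vm", key)) as fh:
                settings[key] = fh.read().strip()
        print("vm settings:", settings)

        # Optional: snapshot mongod's own view of memory as well.
        try:
            from pymongo import MongoClient
            status = MongoClient("localhost", 27017).admin.command("serverStatus")
            mem = status.get("mem", {})
            print("mongod resident MB:", mem.get("resident"),
                  "mapped MB:", mem.get("mapped"))
        except Exception as exc:
            print("skipping mongod snapshot:", exc)

    This doesn't answer the bonus question, but having resident/mapped numbers per run at least shows whether mongod is actually growing into the available RAM under each combination of settings.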

  • What happens to missed writes after a zpool clear?

    - by Kevin
    I am trying to understand ZFS' behaviour under a specific condition, but the documentation is not very explicit about this so I'm left guessing. Suppose we have a zpool with redundancy. Take the following sequence of events: A problem arises in the connection between device D and the server. This causes a large number of failures and ZFS therefore faults the device, putting the pool in degraded state. While the pool is in degraded state, the pool is mutated (data is written and/or changed.) The connectivity issue is physically repaired such that device D is reliable again. Knowing that most data on D is valid, and not wanting to stress the pool with a resilver needlessly, the admin instead runs zpool clear pool D. This is indicated by Oracle's documentation as the appropriate action where the fault was due to a transient problem that has been corrected. I've read that zpool clear only clears the error counter, and restores the device to online status. However, this is a bit troubling, because if that's all it does, it will leave the pool in an inconsistent state! This is because mutations in step 2 will not have been successfully written to D. Instead, D will reflect the state of the pool prior to the connectivity failure. This is of course not the normative state for a zpool and could lead to hard data loss upon failure of another device - however, the pool status will not reflect this issue! I would at least assume based on ZFS' robust integrity mechanisms that an attempt to read the mutated data from D would catch the mistakes and repair them. However, this raises two problems: Reads are not guaranteed to hit all mutations unless a scrub is done; and Once ZFS does hit the mutated data, it (I'm guessing) might fault the drive again because it would appear to ZFS to be corrupting data, since it doesn't remember the previous write failures. Theoretically, ZFS could circumvent this problem by keeping track of mutations that occur during a degraded state, and writing them back to D when it's cleared. For some reason I suspect that's not what happens, though. I'm hoping someone with intimate knowledge of ZFS can shed some light on this aspect.
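    Whatever ZFS does internally with the missed writes, the practical way to remove the doubt after a zpool clear is to follow it with a scrub, which reads every block, verifies it against its checksum and repairs it from the remaining redundancy rather than waiting for a normal read (or a second failure) to stumble over stale data. A minimal sketch of that sequence; the pool and device names are placeholders:

        import subprocess
        import time

        POOL = "tank"        # placeholder pool name
        DEVICE = "c1t2d0"    # placeholder for device D

        # Bring the device back online, then force a full verification pass so
        # anything the device missed while faulted is found and repaired now.
        subprocess.check_call(["zpool", "clear", POOL, DEVICE])
        subprocess.check_call(["zpool", "scrub", POOL])

        # Poll until the scrub finishes, printing status along the way.
        while True:
            status = subprocess.check_output(["zpool", "status", POOL]).decode()
            print(status)
            if "scrub in progress" not in status:
                break
            time.sleep(60)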

  • How to diagnose cause, fix, or work around Adobe ActiveX / COM related error 0x80004005 programmatically

    - by Streamline
    I've built a C# .NET app that uses the Adobe ActiveX control to display a PDF. It relies on a couple DLLs that get shipped with the application. These DLLs interact with the locally installed Adobe Acrobat or Adobe Acrobat Reader installed on the machine. This app is being used by some customer already and works great for nearly all users ( I check to see that the local machine is running at least version 9 of either Acrobat or Reader already ). I've found 3 cases where the app returns the error message "Error HRESULT E_FAIL has been returned from a call to a COM component" when trying to load (when the activex control is loading). I've checked one of these user's machines and he has Acrobat 9 installed and is using it frequently with no problems. It does appear that Acrobat 7 and 8 were installed at one time since there are entries for them in the registry along with Acrobat 9. I can't reproduce this problem locally, so I am not sure exactly which direction to go. The error at the top of the stacktrace is: System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component. Some research into this error indicates it is a registry problem. Does anyone have a clue as to how to fix or work around this problem, or determine how to get to the core root of the problem? The full content of the error message is this: System.Runtime.InteropServices.COMException (0x80004005): Error HRESULT E_FAIL has been returned from a call to a COM component.    at System.Windows.Forms.UnsafeNativeMethods.CoCreateInstance(Guid& clsid, Object punkOuter, Int32 context, Guid& iid)    at System.Windows.Forms.AxHost.CreateWithoutLicense(Guid clsid)    at System.Windows.Forms.AxHost.CreateWithLicense(String license, Guid clsid)    at System.Windows.Forms.AxHost.CreateInstanceCore(Guid clsid)    at System.Windows.Forms.AxHost.CreateInstance()    at System.Windows.Forms.AxHost.GetOcxCreate()    at System.Windows.Forms.AxHost.TransitionUpTo(Int32 state)    at System.Windows.Forms.AxHost.CreateHandle()    at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)    at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)    at System.Windows.Forms.AxHost.EndInit()    at AcrobatChecker.Viewer.InitializeComponent()    at AcrobatChecker.Viewer..ctor()    at AcrobatChecker.Form1.btnViewer_Click(Object sender, EventArgs e)    at System.Windows.Forms.Control.OnClick(EventArgs e)    at System.Windows.Forms.Button.OnClick(EventArgs e)    at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)    at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)    at System.Windows.Forms.Control.WndProc(Message& m)    at System.Windows.Forms.ButtonBase.WndProc(Message& m)    at System.Windows.Forms.Button.WndProc(Message& m)    at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)    at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)    at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

  • Ajax comments form in ASP.NET MVC2

    - by Artiom Chilaru
    I've been playing around with different aspects of MVC for some time now, and I've reached a situation where I'm not sure what would be the best way to solve a problem. I'm hoping that the SO community will help me out here :P I've seen a number of examples of Ajax.BeginForm on the internet, and it seems like a very nifty idea. E.g. you have a dropdown where you select a customer - and on selecting one it will load this client's details in some placeholder on the page. This works perfectly fine. But what to do if you want to tie in some validation in the box? Just hypothetically, imagine an article page, and user comments in the bottom. Below the comments area there's an ajax-y "Add comment" box. When a user adds a comment, it will appear in the comments area, below the last comment there. If I set the Ajax.BeginForm to Append the result of the call to the Comments area, it will work fine. But what if the data posted is not valid? Instead of appending a "successful" comment to the comments area I have to show the user validation errors. At this point I decided that the area INSIDE the Ajax.BeginForm will be inside a partial, and the form's submits will return this partial. Validation works fine. On each submit we reload the contents inside the form element. But how to add the successful comment to the top? Other things to consider: The comment form also has a "Preview" button. When the user clicks on Preview, I should load the rendered comment into a preview box. This will probably be inside the form area as well. I was thinking of using Json results instead. When the user submits the form, the server code will generate a Json object with a Success value, and html rendered partials as some properties. Something like { "success": true, "form": "<html form data>", "comment": "successful comment html to inject into the page" } This would be a perfect solution, except there's no way in MVC to render a partial into a string, inside the controller (separation of context, remember?). So.. what should I do then? Any "correct" way to implement this?

  • Why do many software projects fail today?

    - by TomTom
    As long as there have been software projects, the world has wondered why they fail so often. I would like to know if there is a list or something equivalent which shows how many software projects fail today. It would be nice if there were a comparison over the last 20-30 years. You can also add your top reason why a software project fails. Mine is "Requirements are poor or don't even exist," which also includes "No (real) customer / user involved". EDIT: It is nearly impossible to clearly define the term "fail". Let's say that fail means: the project was more than 10% over budget and time. In my opinion the 10% +/- is a good range for an offer / tender. EDIT: Until now (Feb 11) it seems that most posters agree that a failure of the project is basically a failure of the project management (whatever fail means). But IMHO it also comes out that most developers are not happy with this situation. Perhaps because it is not the manager who gets penalized when a project is not successful, but the "lazy, incompetent" developer teams? When I read the posts I can also hear that there is a big "gap" between the developer side and the management side. The expectations (perhaps also the requirements) seem to be so different that a project cannot be successful in the end (over time / budget; users are not happy; not all first-prio features implemented; too many bugs because developers were forced to implement in too-short timeframes ...). I'm asking myself: how can we improve it? Or do we even have the possibility to improve it? Everybody seems to be unsatisfied with the way it goes now. Can we close the gap between these two worlds? Should we (the developers) go on strike and fight for "high quality requirements" and "realistic / iteration based time schedules"? EDIT: Ralph Westphal and Stefan Lieser have founded a new "community" called Clean-Code-Developer. The aim of the group is to bring more professionalism into software engineering. Independently of whether a developer has a degree or tons of years of experience, anyone can be part of this movement. Clean Code Developers live principles like SOLID every day. A professional developer is the biggest reviewer of his own work. And he has an internal value system which helps him to improve and become better. Check it out at: Clean Code Developer EDIT: Our company is at the moment doing a thing called "Application Development and Maintenance Benchmarking". This is a service offered by IBM to get feedback from someone external on your software engineering process, quality, etc. When we get the results, I will tell you more about it.

    Read the article

  • Android Packaging Problem: resources.ap_ does not exist

    - by Galip
    I have been trying to fix a problem in Eclipse for about 3 hours and I haven't made any progress. The customer is coming tomorrow to look at my app, and I have no time left. This is really frustrating! This morning when I was coding and wanted to run my app on my device, Eclipse crashed all of a sudden: 'aapt.exe has stopped working'. After this Eclipse wouldn't start anymore; it froze at the splash image. I looked on the internet and tried different solutions like going back to Java SE 6 update 20, changing the .ini file etc. In the end reinstalling Eclipse did the job. Shortly after that the 'aapt.exe has stopped working' returned. I found a workaround by changing my project's target: 1.5, 1.6, 2.2, it doesn't matter, as long as it's different from the one before. Now I get the Error generating final archive: java.io.FileNotFoundException: C:\xxx\bin\resources.ap_ does not exist error. I tried a clean but that doesn't work. Deleting and automatically regenerating R.java also didn't work. I ran the same code in NetBeans with the Android plugin and there it gives me 'aapt.exe has stopped working' again :( Please guys, how can I fix this? Edit: I think I may have found the reason. These are the error lines in the console: org.xmlpull.v1.XmlPullParserException: Binary XML file line #3: <bitmap> requires a valid src attribute at android.graphics.drawable.BitmapDrawable.inflate(BitmapDrawable.java:341) at android.graphics.drawable.Drawable.createFromXmlInner(Drawable.java:779) at android.graphics.drawable.Drawable.createFromXml(Drawable.java:720) at com.android.layoutlib.bridge.ResourceHelper.getDrawable(ResourceHelper.java:150) at com.android.layoutlib.bridge.BridgeTypedArray.getDrawable(BridgeTypedArray.java:668) at android.view.View.<init>(View.java:1846) at android.view.View.<init>(View.java:1795) at android.view.ViewGroup.<init>(ViewGroup.java:282) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:619) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:574) at org.eclipse.equinox.launcher.Main.run(Main.java:1407) at org.eclipse.equinox.launcher.Main.main(Main.java:1383) [2011-01-17 16:37:20 - gegevens.xml] Unable to resolve drawable "com.android.layoutlib.utils.ResourceValue@267e33de" in attribute "background" The file it's talking about is 'bg.png'. It's a small png file which I repeat (tile) in an .xml file. <?xml version="1.0" encoding="utf-8"?> <bitmap xmlns:android="http://schemas.android.com/apk/res/android" android:src="@drawable/bg" android:tileMode="repeat" /> This file has worked from the start without any problems. I deleted it from the drawable folder, waited for an error message, and then added it back. The red x next to the folder name went away, but still nothing changed...

    Read the article

  • How do I add a listener that will work on individual Fieldset in Extjs? Clicking the "Add" button sh

    - by Nair
    Testing Window /*! * Ext JS Library 3.0.0 * Copyright(c) 2006-2009 Ext JS, LLC * [email protected] * http://www.extjs.com/license */ Ext.onReady(function(){ Ext.override( Ext.data.Store, { findExact: function( fld, val ) { var hit = null; this.each( function(rec) { if( rec.get(fld) == val ) { hit = rec; return false; }; } ); return hit; } }); var listAdded = 0; var addListBtn = { text: 'Add', handler: function() { Ext.getCmp('tab_list').add(getList()); Ext.getCmp('tab_list').doLayout(); } } var testData = new Ext.data.SimpleStore({ fields: ['no', 'name', 'address','phone','businessPhone'], data: [['68', 'Target','123 Valley Road','(345) 908-9087','(345) 908-9087'], ['69', 'Walmart','456 Main Road','(345) 908-9999','(345) 908-9990']] }); var getList = function() { listAdded++; var items = new Ext.form.FieldSet( { id:listAdded, title: listAdded, collapsible: true, layout: 'form', autoHeight: true, defaults: {width: 300}, defaultType: 'textfield', bodyStyle: 'padding:5px', labelWidth: 225, items: [ { xtype: 'combo', fieldLabel: 'Customer No', name: 'changescustomerNo', hiddenName: 'changescustomerNo', store: new Ext.data.SimpleStore({ fields: ['id','value'], data: [['68','Test1'],['69','Test2']] }), displayField: 'value', valueField: 'id', selectOnFocus: true, mode: 'local', editable: false, triggerAction: 'all', value: ' ', listeners:{select:{ fn:function(combo, value) { var m = testData.findExact( 'no', this.value ); if(m) { //alert(this.id); Ext.getCmp('currentName').setValue(m.get('name')); Ext.getCmp('currentAddress').setValue(m.get('address')); Ext.getCmp('currentTelephoneNumber').setValue(m.get('phone')); Ext.getCmp('currentBusTelephoneNumber').setValue(m.get('businessPhone')); } }//function }//select }//listeners },{ id: 'currentName', fieldLabel: 'Current Name', name: 'currentName', value: '' },{ id: 'currentAddress', width: 298, xtype: 'textarea', fieldLabel: 'Current Address', name: 'currentAddress', value: '' },{ id:'currentTelephoneNumber', fieldLabel: 'Current Telephone Number', name: 'currentTelephoneNumber', value: '' },{ id: 'currentBusTelephoneNumber', fieldLabel: 'Current Business Telephone Number', name: 'currentBusTelephoneNumber', value: '' } ] } ); return items; } var pnlMain = new Ext.Panel({ id: 'theForm', title: 'Sample List', bodyStyle:'padding:5px', autoWidth: true, frame: true, items: [{ xtype: 'tabpanel', id: 'tabpanel', activeTab: 0, height: 540, width: '100%', resizeTabs: true, tabWidth: 125, minTabWidth: 125, layoutOnTabChange: true, deferredRender: false, // Create all form elements on load defaults: { bodyStyle: 'padding:10px', autoScroll: true, layout: 'form', defaultType: 'textfield', labelWidth: 160 }, items:[{ id: 'tab_list', title: 'List', items: getList(), buttons: [ addListBtn ] }] }] }); pnlMain.render('main'); });
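
    One likely source of trouble here (my reading of the code, not something stated in the question): every call to getList() creates fields with the same hard-coded ids ('currentName', 'currentAddress', ...), so once a second fieldset has been added, Ext.getCmp can only ever find the fields of the first one. A sketch of a select listener, assuming Ext JS 3.x, that resolves the fields relative to the combo's own fieldset instead of by global id (the fixed id configs on the textfields would then be dropped or made unique per fieldset):

    // Sketch only (Ext JS 3.x assumed): scope the lookups to the fieldset the combo lives in.
    listeners: {
        select: function(combo, record, index) {
            var m = testData.findExact('no', combo.getValue());
            if (!m) { return; }
            // Walk up the ownerCt chain to the enclosing fieldset...
            var fieldset = combo.findParentByType('fieldset');
            // ...then look the sibling fields up by their name config, not by page-wide id.
            fieldset.find('name', 'currentName')[0].setValue(m.get('name'));
            fieldset.find('name', 'currentAddress')[0].setValue(m.get('address'));
            fieldset.find('name', 'currentTelephoneNumber')[0].setValue(m.get('phone'));
            fieldset.find('name', 'currentBusTelephoneNumber')[0].setValue(m.get('businessPhone'));
        }
    }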

    Read the article

  • How to track deleted self-tracking entities in ObservableCollection without memory leaks

    - by Yannick M.
    In our multi-tier business application we have ObservableCollections of Self-Tracking Entities that are returned from service calls. The idea is that we want to be able to get entities, add, update and remove them from the collection client side, and then send these changes to the server side, where they will be persisted to the database. Self-Tracking Entities, as their name might suggest, track their state themselves. When a new STE is created, it has the Added state; when you modify a property, it sets the Modified state; it can also have the Deleted state, but this state is not set when the entity is removed from an ObservableCollection (obviously). If you want this behavior you need to code it yourself. In my current implementation, when an entity is removed from the ObservableCollection, I keep it in a shadow collection, so that when the ObservableCollection is sent back to the server, I can send the deleted items along, so Entity Framework knows to delete them. Something along the lines of: protected IDictionary<int, IList> DeletedCollections = new Dictionary<int, IList>(); protected void SubscribeDeletionHandler<TEntity>(ObservableCollection<TEntity> collection) { var deletedEntities = new List<TEntity>(); DeletedCollections[collection.GetHashCode()] = deletedEntities; collection.CollectionChanged += (o, a) => { if (a.OldItems != null) { deletedEntities.AddRange(a.OldItems.Cast<TEntity>()); } }; } Now if the user decides to save his changes to the server, I can get the list of removed items and send them along: ObservableCollection<Customer> customers = MyServiceProxy.GetCustomers(); customers.RemoveAt(0); MyServiceProxy.UpdateCustomers(customers); At this point the UpdateCustomers method will check my shadow collection for any items that were removed and send them along to the server side. This approach works fine, until you start to think about the life-cycle of these shadow collections. Basically, when the ObservableCollection is garbage collected there is no way of knowing that we need to remove the shadow collection from our dictionary. I came up with a complicated solution that basically does manual memory management in this case: I keep a WeakReference to the ObservableCollection and every few seconds I check whether the reference is still alive, and if it is not, I remove the shadow collection. But this seems like a terrible solution... I hope the collective genius of StackOverflow can shed light on a better solution. Thanks!
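
    A sketch of one alternative, assuming .NET 4 is available: key the shadow lists with a ConditionalWeakTable, whose entries do not keep their key alive, so each shadow list becomes eligible for collection together with its ObservableCollection and no WeakReference polling is needed. The DeletionTracker type below is illustrative, not part of the original service layer.

    using System.Collections.Generic;
    using System.Collections.ObjectModel;
    using System.Runtime.CompilerServices;

    // Sketch: shadow lists keyed by the collection instance itself. Entries in a
    // ConditionalWeakTable do not keep their key alive, so when the ObservableCollection
    // is garbage collected its shadow list goes with it.
    public class DeletionTracker<TEntity> where TEntity : class
    {
        private readonly ConditionalWeakTable<ObservableCollection<TEntity>, List<TEntity>> _deleted =
            new ConditionalWeakTable<ObservableCollection<TEntity>, List<TEntity>>();

        public void Track(ObservableCollection<TEntity> collection)
        {
            var shadow = _deleted.GetOrCreateValue(collection);
            collection.CollectionChanged += (o, a) =>
            {
                if (a.OldItems != null)
                {
                    foreach (TEntity removed in a.OldItems)
                    {
                        shadow.Add(removed);
                    }
                }
            };
        }

        public IList<TEntity> GetDeleted(ObservableCollection<TEntity> collection)
        {
            List<TEntity> shadow;
            return _deleted.TryGetValue(collection, out shadow) ? shadow : new List<TEntity>();
        }
    }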

    Read the article

  • jQuery unbinding click event when maximum number of children are displayed

    - by RyanP13
    I have a personal details form that allows you to enter a certain number of dependants, which is determined by the JSP application. The first dependant is visible and the user has the option to add dependants up to the maximum number. All other dependants are hidden by default and are displayed when a user clicks the 'Add another dependant' button. When the maximum number of dependants has been reached the button is greyed out and a message is generated via jQuery and displayed to tell the user exactly this. The issue I am having is that when the maximum number of dependants has been reached the message is displayed, but the user can still click the button to add more dependants and the message keeps on generating. I thought unbinding the click event would sort this, but it seems to still be able to generate a second message. Here is the function I wrote to generate the message: // Dependant message function function maxDependMsg(msgElement) { // number of children can change per product, needs to be dynamic // count number of dependants in HTML var $dependLength = $("div.dependant").length; // add class maxAdd to grey out Button // create maximum dependants message and display, will not be created if JS turned off $(msgElement) .addClass("maxAdd") .after($('<p>') .addClass("maxMsg") .append("The selected web policy does not offer cover for more than " + $dependLength + " children, please contact our customer advisers if you wish to discuss alternative policies available.")); } There is a hyperlink with a click event attached like so: $("a.add").click(function(){ // Show the next hidden table on clicking add child button $(this).closest('form').find('div.dependant:hidden:first').show(); // Get the number of hidden tables var $hiddenChildren = $('div.dependant:hidden').length; if ($hiddenChildren == 0) { // save visible state of system message $.cookies.set('cpqbMaxDependantMsg', 'visible'); // show system message that you can't add anymore dependants than what is on page maxDependMsg("a.add"); $(this).unbind("click"); } // set a cookie for the visible state of all child tables $('div.dependant').each(function(){ var $childCount = $(this).index('div.dependant'); if ($(this).is(':visible')) { $.cookies.set('cpqbTableStatus' + $childCount, 'visible'); } else { $.cookies.set('cpqbTableStatus' + $childCount, 'hidden'); } }); return false; }); All of the cookie code is for state saving when users go back and forth through the process.
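
    For what it's worth, a defensive sketch (the guard conditions are my addition, not from the question): make the message idempotent by only appending it when it is not already in the DOM, and ignore clicks entirely once the button has been greyed out, so it no longer matters whether the unbind fires in time. The cookie-saving loop from the original handler is omitted here for brevity.

    // Sketch: make the message idempotent and stop handling clicks once the limit is hit.
    $("a.add").click(function (e) {
        e.preventDefault();

        // Do nothing at all once the button has been greyed out by maxDependMsg.
        if ($(this).hasClass("maxAdd")) {
            return;
        }

        $(this).closest('form').find('div.dependant:hidden:first').show();

        if ($('div.dependant:hidden').length === 0) {
            $.cookies.set('cpqbMaxDependantMsg', 'visible');

            // Only build the message if it is not already on the page.
            if ($("p.maxMsg").length === 0) {
                maxDependMsg("a.add");
            }
        }
    });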

    Read the article

  • Entity Attribute Value Database vs. strict Relational Model Ecommerce question

    - by Dr. Zim
    It is safe to say that the EAV/CR database model is bad. That said, Question: What database model, technique, or pattern should be used to deal with "classes" of attributes describing e-commerce products which can be changed at run time? In a good E-commerce database, you will store classes of options (like TV resolution then have a resolution for each TV, but the next product may not be a TV and not have "TV resolution"). How do you store them, search efficiently, and allow your users to setup product types with variable fields describing their products? If the search engine finds that customers typically search for TVs based on console depth, you could add console depth to your fields, then add a single depth for each tv product type at run time. There is a nice common feature among good e-commerce apps where they show a set of products, then have "drill down" side menus where you can see "TV Resolution" as a header, and the top five most common TV Resolutions for the found set. You click one and it only shows TVs of that resolution, allowing you to further drill down by selecting other categories on the side menu. These options would be the dynamic product attributes added at run time. Further discussion: So long story short, are there any links out on the Internet or model descriptions that could "academically" fix the following setup? I thank Noel Kennedy for suggesting a category table, but the need may be greater than that. I describe it a different way below, trying to highlight the significance. I may need a viewpoint correction to solve the problem, or I may need to go deeper in to the EAV/CR. Love the positive response to the EAV/CR model. My fellow developers all say what Jeffrey Kemp touched on below: "new entities must be modeled and designed by a professional" (taken out of context, read his response below). The problem is: entities add and remove attributes weekly (search keywords dictate future attributes) new entities arrive weekly (products are assembled from parts) old entities go away weekly (archived, less popular, seasonal) The customer wants to add attributes to the products for two reasons: department / keyword search / comparison chart between like products consumer product configuration before checkout The attributes must have significance, not just a keyword search. If they want to compare all cakes that have a "whipped cream frosting", they can click cakes, click birthday theme, click whipped cream frosting, then check all cakes that are interesting knowing they all have whipped cream frosting. This is not specific to cakes, just an example.
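
    To make the trade-off concrete, here is a minimal sketch (generic SQL; all table and column names are illustrative, not from the question) of the commonly discussed compromise: fixed relational columns for the stable parts of a product, plus an attribute-class table so new per-type attributes can be declared at run time, with typed value columns so the values keep their significance for comparisons and drill-down counts.

    -- Attribute "classes" are declared per product type, so "TV Resolution" only exists for TVs.
    CREATE TABLE product_type (
        product_type_id INT PRIMARY KEY,
        name            VARCHAR(100) NOT NULL              -- e.g. 'TV', 'Cake'
    );

    CREATE TABLE attribute (
        attribute_id    INT PRIMARY KEY,
        product_type_id INT NOT NULL REFERENCES product_type (product_type_id),
        name            VARCHAR(100) NOT NULL,             -- e.g. 'TV Resolution', 'Console Depth'
        data_type       VARCHAR(20)  NOT NULL              -- 'text', 'number', ...
    );

    CREATE TABLE product (
        product_id      INT PRIMARY KEY,
        product_type_id INT NOT NULL REFERENCES product_type (product_type_id),
        name            VARCHAR(200) NOT NULL,
        price           DECIMAL(10,2) NOT NULL
    );

    CREATE TABLE product_attribute_value (
        product_id      INT NOT NULL REFERENCES product (product_id),
        attribute_id    INT NOT NULL REFERENCES attribute (attribute_id),
        value_text      VARCHAR(200),
        value_number    DECIMAL(18,4),
        PRIMARY KEY (product_id, attribute_id)
    );

    -- Drill-down counts for the side menu: most common TV resolutions in the result set
    -- (add TOP 5 / LIMIT 5 depending on your RDBMS).
    SELECT v.value_text, COUNT(*) AS product_count
    FROM product_attribute_value v
    JOIN attribute a ON a.attribute_id = v.attribute_id
    WHERE a.name = 'TV Resolution'
    GROUP BY v.value_text
    ORDER BY product_count DESC;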

    Read the article

  • What Is The Best Database For Delphi Desktop Applications That Supports Stored Procedures?

    - by Cape Cod Gunny
    I started with Turbo Pascal 3, went to TP5, Bought TP6 called Borland the next day and downgraded to TP5.5. Bought Delphi 3, and now have Delphi 5 Enterprise. I sort of lost interest in writing code about 4-5 years ago for two reasons; Spent all day writing ASP & SQL for someone else. PC Techniques magazine went away. I've got a few programs in the shareware market that are solid performers but are in need of serious updating. I love Delphi or did when it was Borland (before Borland bought DBase and all the other crap), I'd like to salvage as much of my D5E code as possible but I doubt I can. I plan on upgrading to Delphi 2010. My next software release needs to interact with a database. I'm very proficient with MS Sql and like to put all of the database code in stored procedures. What is the best database choice that interacts well with Delphi, allows stored procedures and is so easy to deploy that even the Geico gecko could deploy it? 10/25/2009 18:53 PM EST Re-Opened After Reading Install Docs for Delphi 2010 I downloaded a trial version of Delphi 2010 and unzipped the install. I've been reading the install docs included in the package. I started with the install.htm inside the zip package. install.htm wisely tells you to see the following two articles: Installation Notes: http://edn.embarcadero.com/article/39754 Release Notes: http://edn.embarcadero.com/article/39758 the release notes state the following... MSSQL driver requires the installation of the SQL Native Client. SQL Native Client 2008 is required for dbxmss.dll. SQL Native Client 2005 is required for dbxmss9.dll I checked my machine to see if SQL Native Client is installed. Nope. I wasn't done reading the docs so I made a note to install SQL Native Client. I googled dbxmss.dll and dbxmss9.dll and found a very interesting thread on the Embarcadero forums. read thread here. After reading this thread and some careful thought I don't think I will be using Microsoft SQL Express. I can't rely on my customers having the right drivers installed. So, I'm back to looking for a different solution. If I'm selling a $40 product to the general masses I need to have a bulletproof solution that doesn't require my brand new customer to update their machine before my software will work.

    Read the article

  • jQuery Ajax (beforeSend and complete) working properly on FireFox but not on IE8 and Chrome

    - by Farhan Zia
    I am using jQuery ajax version 1.4.1 in my MVC application (though the issue I am discussing was the same with the older jQuery 1.3.2) to check during customer registration whether the username is already registered. When the user clicks the "Check Availability" button, I show a busy image in place of the check button (actually hiding the check button and showing the image) while checking the availability on the server, and then display a message. It is a synchronous call (async: false) and I used beforeSend: and complete: to show and hide the busy image and the check button. This works well in Firefox, but in IE 8 and Chrome neither does the busy image appear nor does the check button hide; instead the check button stays pressed as if the whole thing has hung. The available and not available messages appear correctly though. Below is the code. HTML in a User Control (ascx), where I have replaced the angle brackets with square brackets: [div id="available"]This Username is Available [div id="not_available"]This Username is not available [input id="txtUsername" name="txtUsername" type="text" size="50" /]  [button id="check" name="check" type="button"]Check Availability[/button] [img id="busy" src="/Content/Images/busy.gif" /] At the top of this user control, I link an external JavaScript file that has the following code: $(document).ready(function() { $('img#busy').hide(); $('div#available').hide(); $('div#not_available').hide(); $("button#check").click(function() { var available = checkUsername($("input#txtUsername").val()); if (available == "1") { $("div#available").show(); $("div#not_available").hide(); } else { $("div#available").hide(); $("div#not_available").show(); } }); }); function checkUsername(username) { $.ajax({ type: "POST", url: "/SomeController/SomeAction", data: { "id": username }, timeout: 3000, async: false, beforeSend: function() { $("button#check").hide(); $("img#busy").show(); }, complete: function() { $("button#check").show(); $("img#busy").hide(); }, cache: false, success: function(result) { return result; }, error: function(error) { $("img#busy").hide(); $("button#check").show(); alert("Some problems have occured. Please try again later: " + error); } }); }
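
    A likely explanation (an assumption on my part, not something stated in the question): async: false blocks the browser's UI thread for the duration of the request, so the DOM changes made in beforeSend never get a chance to repaint in IE 8 and Chrome before the call completes; Firefox just happens to be more forgiving. A sketch of an asynchronous variant that keeps the show/hide behaviour and hands the result to a callback instead of returning it (note that the original checkUsername never actually returns the server's response either, since "return result" inside success only returns from the inner callback):

    // Sketch: asynchronous check with a callback, so the busy image can actually repaint.
    function checkUsername(username, done) {
        $.ajax({
            type: "POST",
            url: "/SomeController/SomeAction",
            data: { id: username },
            timeout: 3000,
            cache: false,
            beforeSend: function () {
                $("button#check").hide();
                $("img#busy").show();
            },
            complete: function () {
                $("button#check").show();
                $("img#busy").hide();
            },
            success: function (result) {
                done(result);           // hand the result back instead of returning it
            },
            error: function (xhr, status, err) {
                alert("Some problems have occurred. Please try again later: " + status);
            }
        });
    }

    $("button#check").click(function () {
        checkUsername($("input#txtUsername").val(), function (available) {
            $("div#available").toggle(available == "1");
            $("div#not_available").toggle(available != "1");
        });
    });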

    Read the article

  • xslt broken: pattern does not match

    - by krisvandenbergh
    I'm trying to query an xml file using the following xslt: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:bpmn="http://dkm.fbk.eu/index.php/BPMN_Ontology"> <!-- Participants --> <xsl:template match="/"> <html> <body> <table> <xsl:for-each select="Package/Participants/Participant"> <tr> <td><xsl:value-of select="ParticipantType" /></td> <td><xsl:value-of select="Description" /></td> </tr> </xsl:for-each> </table> </body> </html> </xsl:template> </xsl:stylesheet> Here's the contents of the xml file: <?xml version="1.0" encoding="utf-8"?> <?xml-stylesheet type="text/xsl" href="xpdl2bpmn.xsl"?> <Package xmlns="http://www.wfmc.org/2008/XPDL2.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" Id="25ffcb89-a9bf-40bc-8f50-e5afe58abda0" Name="1 price setting" OnlyOneProcess="false"> <PackageHeader> <XPDLVersion>2.1</XPDLVersion> <Vendor>BizAgi Process Modeler.</Vendor> <Created>2010-04-24T10:49:45.3442528+02:00</Created> <Description>1 price setting</Description> <Documentation /> </PackageHeader> <RedefinableHeader> <Author /> <Version /> <Countrykey>CO</Countrykey> </RedefinableHeader> <ExternalPackages /> <Participants> <Participant Id="008af9a6-fdc0-45e6-af3f-984c3e220e03" Name="customer"> <ParticipantType Type="RESOURCE" /> <Description /> </Participant> <Participant Id="1d2fd8b4-eb88-479b-9c1d-7fe6c45b910e" Name="clerk"> <ParticipantType Type="ROLE" /> <Description /> </Participant> </Participants> </Package> Despite, the simple pattern, the foreach doesn't work. What is wrong with Package/Participants/Participant ? What do I miss here? Thanks a lot!
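
    Most likely the culprit (my reading, not confirmed in the thread) is the default namespace: the XML declares xmlns="http://www.wfmc.org/2008/XPDL2.1", so unprefixed names like Package in the XPath match nothing. In XSLT 1.0 you have to bind that namespace to a prefix and use it in every step. A sketch of an adjusted stylesheet follows; note also that ParticipantType carries its value in the Type attribute, so @Type is probably what you want to output, and the unused rdf/rdfs/bpmn namespace declarations have been dropped here.

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:xpdl="http://www.wfmc.org/2008/XPDL2.1">
      <!-- Participants: every step is qualified with the xpdl prefix bound to the document's default namespace -->
      <xsl:template match="/">
        <html>
          <body>
            <table>
              <xsl:for-each select="xpdl:Package/xpdl:Participants/xpdl:Participant">
                <tr>
                  <td><xsl:value-of select="xpdl:ParticipantType/@Type" /></td>
                  <td><xsl:value-of select="xpdl:Description" /></td>
                </tr>
              </xsl:for-each>
            </table>
          </body>
        </html>
      </xsl:template>
    </xsl:stylesheet>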

    Read the article
