Search Results

Search found 13664 results on 547 pages for 'storage engine'.

  • HOWTO Turn off SPARC T4 or Intel AES-NI crypto acceleration.

    - by darrenm
    Since we released hardware crypto acceleration for SPARC T4 and Intel AES-NI support, we have had a common question come up: 'How do I test without the hardware crypto acceleration?'. Initially this came up just for development use, so developers can do unit testing on a machine that has hardware offload but still cover the code paths for a machine that doesn't (our integration and release testing would run on all supported types of hardware anyway). I've also seen it asked in a customer context too, so that we can show that there is a performance gain from the hardware crypto acceleration (not just the fact that the SPARC T4 is a much faster processor than the T3) and measure what it is for their application. With SPARC T2/T3 we could easily disable the hardware crypto offload by running 'cryptoadm disable provider=n2cp/0'. We can't do that with SPARC T4 or with Intel AES-NI because in both of those classes of processor the encryption doesn't require a device driver; instead it is implemented as unprivileged instructions callable from user land. It turns out there is a way to do this by using features of the Solaris runtime loader (ld.so.1).
    First I need to expose a little bit of implementation detail about how the Solaris Cryptographic Framework is implemented in Solaris 11. One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions, selected at runtime based on the capabilities of the machine. The alternative is having the application coded to call getisax() and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and the, unfortunately misnamed for historical reasons, libsoftcrypto.so).
    The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control the version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 to not select the HWCAP section matching certain features even if isainfo says they are present. For SPARC T4 that would be:

        export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"

    and for Intel systems with AES-NI support:

        export LD_HWCAP="-aes"

    This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly. It also works for the Oracle DB and Java JCE. However, it does not work for the default-enabled OpenSSL "t4" or "aes-ni" engines (unfortunately), because they make explicit calls to getisax() themselves rather than using multiple ELF cap sections. However, we can still use OpenSSL to demonstrate this by explicitly selecting the "pkcs11" engine, using only a single process and thread.

        $ openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     54170.81k   187416.00k   489725.70k   805445.63k  1018880.00k

        $ LD_HWCAP="-aes" openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     29376.37k    58328.13k    79031.55k    86738.26k    89191.77k

    We can clearly see the difference this makes in the case where AES offload to the SPARC T4 was disabled. The "t4" engine is faster than the pkcs11 one because there is less overhead (again on a SPARC T4-1 using only a single process/thread - using -multi you will get even bigger numbers).

        $ openssl speed -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc     85526.61k    89298.84k    91970.30k    92662.78k    92842.67k

    Yet another cool feature of the Solaris linker/loader, thanks Rod and Ali. Note that the openssl speed output above is not intended to show the actual performance of any particular benchmark, just that there is a significant improvement from using hardware acceleration on SPARC T4. For cryptographic performance benchmarks see the http://blogs.oracle.com/BestPerf/ postings.

    Read the article

  • Don’t string together XML

    - by KyleBurns
    XML has been a pervasive tool in software development for over a decade.  It provides a way to communicate data in a manner that is simple to understand and free of platform dependencies.  Also pervasive in software development is what I consider to be the anti-pattern of using string manipulation to create XML.  This usually starts with a “quick and dirty” approach because you need an XML document and looks like (for all of the examples here, we’ll assume we’re writing the body of a method intended to take a Contact object and return an XML string): return string.Format("<Contact><BusinessName>{0}</BusinessName></Contact>", contact.BusinessName);   In the code example, I created (or at least believe I created) an XML document representing a simple contact object in one line of code with very little overhead.  Work’s done, right?  No it’s not.  You see, what I didn’t realize was that this code would be used in the real world instead of my fantasy world where I own all the data and can prevent any of it containing problematic values.  If I use this code to create a contact record for the business “Sanford & Son”, any XML parser will be incapable of processing the data because the ampersand is special in XML and should have been encoded as &amp;. Following the pattern that I have seen many times over, my next step as a developer is going to be to do what any developer in his right mind would do – instruct the user that ampersands are “bad” and they cannot be used without breaking computers.  This may work in many cases and is often accompanied by logic at the UI layer of applications to block these “bad” characters, but sooner or later someone is going to figure out that other applications allow for them and will want the same.  This often leads to the creation of “cleaner” functions that perform a replace on the strings for every special character that the person writing the function can think of.  The cleaner function will usually grow over time as support requests reveal characters that were missed in the initial cut.  Sooner or later you end up writing your own somewhat functional XML engine. I have never been told by anyone paying me to write code that they would like to buy a somewhat functional XML engine.  My employer/customer’s needs have always been for something that may use XML, but ultimately is functionality that drives business value. I’m not going to build an XML engine. So how can I generate XML that is always well-formed without writing my own engine?  Easy – use one of the ones provided to you for free!  If you’re in a shop that still supports VB6 applications, you can use the DomDocument or MXXMLWriter object (of the two I prefer MXXMLWriter, but I’m not going to fully describe either here).  
    For .Net Framework applications prior to the 3.5 framework, the code is a little more verbose than I would like, but easy once you understand what pieces are required:

        using (StringWriter sw = new StringWriter())
        {
            using (XmlTextWriter writer = new XmlTextWriter(sw))
            {
                writer.WriteStartDocument();
                writer.WriteStartElement("Contact");
                writer.WriteElementString("BusinessName", contact.BusinessName);
                writer.WriteEndElement(); // end Contact element
                writer.WriteEndDocument();
                writer.Flush();
                return sw.ToString();
            }
        }

    Looking at that code, it’s easy to understand why people are drawn to the initial one-liner.  Lucky for us, the 3.5 .Net Framework added the System.Xml.Linq.XElement object.  This object takes away a lot of the complexity present in the XmlTextWriter approach and allows us to generate the document as follows:

        return new XElement("Contact", new XElement("BusinessName", contact.BusinessName)).ToString();

    While it is very common for people to use string manipulation to create XML, I’ve discussed here reasons not to use this method and introduced powerful APIs that are built into the .Net Framework as an alternative.  I’ve given a very simplistic example here to highlight the most basic XML generation task.  For more information on the XmlTextWriter and XElement APIs, check out the MSDN library.
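
    Worth noting (an added aside, not from the original article): XElement also takes care of the "Sanford & Son" problem from earlier, because it escapes reserved characters for you. A minimal sketch:

        using System;
        using System.Xml.Linq;

        class EscapingDemo
        {
            static void Main()
            {
                // The ampersand is encoded as &amp; without any hand-rolled "cleaner" function.
                var element = new XElement("Contact",
                    new XElement("BusinessName", "Sanford & Son"));
                Console.WriteLine(element.ToString(SaveOptions.DisableFormatting));
                // Prints: <Contact><BusinessName>Sanford &amp; Son</BusinessName></Contact>
            }
        }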

    Read the article

  • SD Card reader not working on Sony Vaio

    - by TessellatingHeckler
    This laptop (Sony Vaio VGN-Z31MN/B PCG-6z2m) has been installed with Windows 7 64 bit, all the drivers from Sony's VAIO site are installed, and everything in Device Manager both (a) has a driver and (b) shows as working, with no exclamation marks or warnings. "Hide empty drives" in Folder options is disabled so the card reader appears, but it will not read the card ("please insert a disk in drive O:"). Previously, when the laptop had Windows XP on it, it could read the same card. Also, the driver Windows Update suggests ("SD Card Reader") doesn't work, and Ricoh's own drivers install properly but exhibit the same behaviour. Other 3rd party driver suggestions from forums (Acer and Texas Instruments FlashMedia) do not seem to install properly. I would post the PCI id if I had it, but it was just showing up as rimsptsk\diskricohmemorystickstorage (while it had the Ricoh driver installed). Edit: If there are any lower level diagnostic utilities which might shed more light on it I'd welcome hearing of them. Anything which might get it to put troubleshooting logs in the event log or identify chipsets or whatever... Update: Device details are:

        SD\VID_03&OID_5344&PID_SD04G&REV_8.0\5&4617BC3&0&0 : SD Memory Card
        PCI\VEN_8086&DEV_2934&SUBSYS_9025104D&REV_03\3&21436425&0&E8: Intel(R) ICH9 Family USB Universal Host Controller - 2934
        PCI\VEN_1180&DEV_0476&SUBSYS_9025104D&REV_BA\4&1BD7BFCD&0&20F0: Ricoh R/RL/5C476(II) or Compatible CardBus Controller
        RIMSPTSK\DISK&VEN_RICOH&PROD_MEMORYSTICKSTORAGE&REV_1.00\MS0001: SD Storage Card
        PCI\VEN_1180&DEV_0592&SUBSYS_9025104D&REV_11\4&1BD7BFCD&0&24F0: Ricoh Memory Stick Host Controller
        WPDBUSENUMROOT\UMB\2&37C186B&1&STORAGE#VOLUME#_??_RIMSPTSK#DISK&VEN_RICOH&PROD_MEMORYSTICKSTORAGE&REV_1.00#MS0001#: O:\
        STORAGE\VOLUME\{C82A81B8-5A4F-11E0-AACC-806E6F6E6963}#0000000000100000: Generic volume
        PCI\VEN_1180&DEV_0822&SUBSYS_9025104D&REV_21\4&1BD7BFCD&0&22F0: SDA Standard Compliant SD Host Controller
        ROOT\LEGACY_FVEVOL\0000 : Bitlocker Drive Encryption Filter Driver
        PCI\VEN_1180&DEV_0832&SUBSYS_9025104D&REV_04\4&1BD7BFCD&0&21F0: Ricoh 1394 OHCI Compliant Host Controller

    Now going to search for drivers for that.

    Read the article

  • cPanel web servers mounting home partition to a NAS or SAN

    - by Scott
    Hello, I currently have 2 cPanel web servers that are little 1RU dual-CPU quad-core Xeons. They have a lot of resources for processing and handling web requests, and never exceed more than 10% CPU usage. They also have plenty of RAM. The problem, though, is that they both have RAID 1 160Gb SAS hard disk drives in them that are 75% full, and growing by the day. I didn't think that the amount of disk usage would be so high, but due to the nature of the sites hosted, this has become an issue. The easy fix would be just to upgrade the hard drives to something bigger (probably not of the SAS variety), but I am thinking of keeping the current machines as "processing servers" and buying a central "storage server" with about 12TB of storage. The /home/ partition on each of the 1RU servers would be mounted to a NAS or SAN point on this central storage server. My questions are:
    - Has anyone got a cPanel setup where they mount /home/ to a NAS or SAN elsewhere? If so, can you provide details as to what you did and how it went :)
    - Any recommendations on networking? Is gigabit ethernet enough? Is TCP/IP going to be a noticeable performance problem? Anyone used a TOE card?
    - Anyone benchmarked or had any performance issues with SAN over NAS?
    Any help greatly appreciated. Scott

    Read the article

  • zfs setup question

    - by Staale
    Currently I have a Linux storage box and server with 4x 750gb hard drives in raid-5 with ext3. I have ordered 3x 1.5tb disks to upgrade this. Here is my planned upgrade:
    Backup:
    - Format the 1.5tb disks
    - Copy all data from the raid-5 disks to the 1.5tb disks
    - Destroy the raid-5 array.
    New setup:
    - Create a VirtualBox system and install Nexenta (OpenSolaris + Ubuntu) on it.
    - Create a zfs pool with raidz1 with the 4 750gb disks.
    - Copy from the 1.5tb disks to the VirtualBox zfs pool.
    - Format the 1.5tb disks.
    - Replace 3 of the 750gb disks with 1.5tb disks.
    - Reuse the 750gb disks elsewhere.
    The reason I wish to keep one 750gb disk is that I can't grow the disk count in a raidz array, and this gives me the option of replacing that disk later for an extra 750gb of storage. Would the ZFS performance be good running through VirtualBox? Or will the performance overhead be too large? Will I get 1.5tb+1.5tb+750gb storage on the raidz? Or just 750gbx3 until all disks are 1.5tb?

    Read the article

  • ZFS + FreeBSD + virtualbox

    - by John
    Hi, I'm configuring a FreeBSD server hosting VirtualBox, serving half a dozen mission-critical, busy mail servers. I just learned about ZFS and I'm quite attracted to it, but I have a few questions:
    - What is the CPU overhead of ZFS? I googled and found little (or no) benchmarking of that.
    - From what I learned, when ZFS updates files, it keeps the old file as a snapshot and writes the updated part for the new version. However, that would seem to mean that each snapshot it keeps requires significant storage overhead. How much is this storage overhead? For example, suppose I have 2TB usable space; how much space can actually be used for the latest version of files one year later?
    - Is FreeBSD with ZFS hosting VirtualBox, serving half a dozen busy, mission-critical guest mail servers, a reasonable combination? Anything particular to be careful with? And can I still choose ZFS for the guest OSs? This is because I may build another identical box for redundancy, and will need to do some mirroring between each pair of the guest systems across the boxes.
    - I'm trying to configure a Dell R710 for this. From what I learned, I shouldn't choose any RAID at all, is that true? In that case, are the drives still hot-swappable?
    - This may sound a bit pathetic, but since I have no experience with ZFS at all, and this is a mission-critical server, I'll ask just in case: I'm choosing twin Intel L5630 processors and 6 x 600GB 15K RPM Serial-Attach SCSI drives. If I need more space in the future, I would just hot swap some drives with larger capacity to expand the storage. There is no problem with these, right?

    Read the article

  • How to get the permissions right for /dev/raw1394

    - by Mark0978
    I recently upgraded one of my ubuntu machines to Karmic and I'm having trouble getting the permissions of /dev/raw1394 set to 0666. The only thing this machine is used for is recording audio from a firepod which uses /dev/raw1394 via jackd, and there are no other FireWire devices connected, so security around this device is not really an issue. If I run as root, everything works as expected, but I have some folks that run the recorder that I don't want to have root access. However, I can't figure out which lines set up the perms. I've tried this:

        /etc/udev/permissions.d/raw1394.rules:raw1394:root:root:0666

    And I have this setup (default install):

        /lib/udev/rules.d/75-persistent-net-generator.rules:SUBSYSTEMS=="ieee1394", ENV{COMMENT}="Firewire device $attr{host_id})"
        /lib/udev/rules.d/75-cd-aliases-generator.rules:# the "path" of usb/ieee1394 devices changes frequently, use "id"
        /lib/udev/rules.d/75-cd-aliases-generator.rules:ACTION=="add", SUBSYSTEM=="block", SUBSYSTEMS=="usb|ieee1394", ENV{ID_CDROM}=="?*", ENV{GENERATED}!="?*", \
        /lib/udev/rules.d/60-persistent-storage-tape.rules:KERNEL=="st*[0-9]|nst*[0-9]", ATTRS{ieee1394_id}=="?*", ENV{ID_SERIAL}="$attr{ieee1394_id}", ENV{ID_BUS}="ieee1394"
        /lib/udev/rules.d/50-udev-default.rules:# FireWire (deprecated dv1394 and video1394 drivers)
        /lib/udev/rules.d/50-udev-default.rules:KERNEL=="dv1394-[0-9]*", NAME="dv1394/%n", GROUP="video"
        /lib/udev/rules.d/50-udev-default.rules:KERNEL=="video1394-[0-9]*", NAME="video1394/%n", GROUP="video"
        /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[!0-9]|sr*", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}"
        /lib/udev/rules.d/60-persistent-storage.rules:KERNEL=="sd*[0-9]", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}-part%n"

    And I find these lines in /var/log/syslog:

        Apr 30 09:11:30 record kernel: [ 3.284010] ieee1394: Node added: ID:BUS[0-00:1023] GUID[000a9200c7062266]
        Apr 30 09:11:30 record kernel: [ 3.284195] ieee1394: Host added: ID:BUS[0-01:1023] GUID[00d0035600a97b9f]
        Apr 30 09:11:30 record kernel: [ 18.372791] ieee1394: raw1394: /dev/raw1394 device initialized

    What I can't figure out is which line actually creates that raw1394 device in the first place. How do you get /dev/raw1394 to have permissions 0666?
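
    For what it's worth, a minimal sketch of the usual fix on a Karmic-era udev (added here as an assumption based on the default-rule syntax quoted above, not something from the original post; the file name 55-raw1394.rules is arbitrary): none of the quoted default rules assign permissions to raw1394, so you add a local rule that matches the kernel device name directly.

        # /etc/udev/rules.d/55-raw1394.rules (hypothetical local override)
        # Match the raw1394 node created by the ieee1394/raw1394 kernel modules
        # and open it up to non-root users.
        KERNEL=="raw1394", MODE="0666"

    The old /etc/udev/permissions.d format shown above predates this rules syntax and is ignored by newer udev releases, which would be consistent with the behaviour described.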

    Read the article

  • ZFS: Mirror vs. RAID-Z

    - by John Clayton
    I'm planning on building a file server using OpenSolaris and ZFS that will provide two primary services - be an iSCSI target for XenServer virtual machines & be a general home file server. The hardware I'm looking at includes 2x 4-port SATA controllers, 2x small boot drives (one on each controller), and 4x big drives for storage. This allows one free port per controller for upgrading the array down the road. Where I'm a little confused is how to set up the storage drives. For performance, mirroring appears to be king. I'm having a hard time seeing what the benefit of using RAIDZ over mirroring would be. With this setup I can see two options - two mirrored pools in one stripe, or RAIDZ2. Both should protect against 2 drive failures, and/or one controller failure... the only benefit of RAIDZ2 would be that any 2 drives could fail. The usable storage should be 50% of capacity in both cases, but the first should have much better performance, right? The other thing I'm trying to wrap my mind around is the benefit of mirrored arrays with more than two devices. For data integrity what, if any, would be the benefit of RAIDZ over a three-way mirror? Since ZFS maintains file integrity, what does RAIDZ bring to the table... don't ZFS's integrity checks negate the value of RAIDZ's parity?
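
    As a rough worked count (added here for illustration; it is not part of the original question): with 4 data drives there are 6 possible two-drive failure combinations. Two striped mirror pairs survive only the 4 combinations where the failed drives sit in different pairs (losing both halves of one mirror loses the pool), while RAIDZ2 survives all 6. Both layouts give 2 drives' worth of usable capacity, so the trade-off really is failure coverage versus performance.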

    Read the article

  • 0 connected nodes in datastax opscenter

    - by gansbrest
    Installed opscenterd on a separate node outside of the cluster, but within the firewall (aws security group). Tested all possible ports between the agents and the opscenter server. No errors in the log:

        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Initializing event storage.
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Attempting to load all persisted alert rules
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted alert rules
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done initializing event storage.
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: Done loading persisted scheduled job descriptions
        2013-10-30 01:07:23+0000 [FC_Cluster] INFO: OpsCenter starting up.
        2013-10-30 01:07:23+0000 [] INFO: Finished starting new cluster services for FC_Cluster
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.34.10.185 is version u'3.2.2'
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.32.37.251 is version u'3.2.2'
        2013-10-30 01:08:04+0000 [FC_Cluster] INFO: Agent for ip 10.82.226.252 is version u'3.2.2'

    The most interesting part is that I can see some data in the opscenter UI: when I stop the agents, no data is displayed, and when I start them it shows up again, but at the same time it shows 0 connected nodes. Storage capacity is even funnier - 3 of 0 nodes.. Any ideas why that could be happening?

    Read the article

  • VMware vSphere 4.1 and BackupExec 2010

    - by Josh
    I'm sure a common problem with most shops is backups: their size, and the window in which you have to back up the data. What we are working with:
    - VMware vSphere 4.1 Cluster
    - PS4000XV Equallogic Storage Array (1.6TB Volume dedicated for Backup to Disk)
    - Physical Backup Server with a single LTO4 drive.
    - BackupExec 2010 R3 with the following agents: Exchange, SQL, Active Directory, VMware.
    - Dual Gigabit MPIO Connections between all devices (Storage Array, Backup Server, VM Hosts)
    What we would like to accomplish: I would like to implement an efficient Backup to Disk to Tape solution where all of our VMs are backed up to the Storage Array first, and then once completely backed up to the array are replicated to tape. In the event we needed to recover, we would be able to do so directly from tape. Where we are currently: with the several ways I have set up the jobs in Backup Exec 2010 R3, the backup jobs all queue up at the same time; as soon as a job has finished backing up to disk it then starts that same job to tape, but pulling from the original source instead of the designated B2D location. I understand that I could create a job that backs up the "Backup to Disk" folder to tape, but in the event of restoration, I would first need to stage the data in the B2D folder before I could restore the VM. I would really like to hear from individuals in similar situations. Any and all comments and critiques are appreciated.

    Read the article

  • ESX 4.0 space: DASD, NAS, or ?

    - by thormj
    I put together an ESX box for better management, but its performance is a WTF item; I'm a noob at dealing with ESX, so I'm looking for a laundry-list of reading material to help me straighten this out so I can go back to .NET programming.
    Current storage system: We're running Raid5+Hotspare (8x500 GB spindles) on a PERC6i on a Dell 2910. Due to ESX limitations, the PERC is showing the storage as 1x2TB + 1x800GB "partitions." I'm not sure of the setup's configuration (stride / stripe / ???) at all.
    Our applications: We have a SBS server as well as a minor (2x50 GB, but growing at 10GB/month) database server... Our application that lives on the database VM is CPU and I/O intensive; it's a database churning exercise mixed in with a lot of computation on the data (fixing that performance is what I'm supposed to be working on)...
    Performance issue: When I do a backup, restore, or worse (copy a backup from 1 VM to another to move it to the QA VM), the entire system slows to a crawl (even "unrelated" VMs). I originally thought a DASD situation would be quite good since you had PCI-X bandwidth, but the systemwide slowdown is killing productivity.
    Questions:
    - What should I do to make an intelligent decision about NAS vs RAID vs SAN vs DASD? Are there sweet spots/ugly spots in the storage setup?
    - Can you use a SSD PCI-X card in ESX for the tempdb? Good/Bad idea?
    - Is there any way to "share" some image in a copy-on-write fashion? Most of the "Backup-Copy-Restore" is to "put a clean image on the dev boxes"; if I could have them "share" the master image, the "big copy" (2x50 GB) would only need to be done once per week instead of once per dev per week... [runtime performance isn't a concern with the dev boxes, but the backup/copy/restore kills production, SBS, and everything else on the box]

    Read the article

  • Is it possible/practical to install and run Linux on a USB flash drive?

    - by Graeme Donaldson
    I'm going to replace my old 2004 vintage desktop PC soon and I have an idea of what I want to do, I'm just not sure if it's possible or realistic. In the time since I built the old PC it has slowly become less used as a PC and more as a file server, so I figured I'd build a small file server which could also function as a router/DHCP/DNS/whatever box. The idea is to base it on an Atom system. I have my eye on the Intel D510MO for the moment. This supports 2 SATA disks, and I'd prefer to dedicate those to data storage. I'd like to install Ubuntu Server or maybe Debian on a 8/16GB USB flash drive. I have seen plenty of tutorials on how to perform an installation from a USB drive, but I can't seem to find any info on actually booting and running the OS from USB flash. Is this even possible? Is it practical? This box will mostly be used for: Making backups of mine and my wife's notebooks via LAN. Will use SMB or NFS for this. Digital media storage, which will be accessed by a Mede8er box with no storage of its own. I will most likely use NFS for this.

    Read the article

  • Windows XP Setup Fails to Recognize USB Floppy after formatting AHCI disk

    - by Strahn
    I am attempting to install Windows XP Professional x64 onto an HP EliteBook 8540w. I have downloaded both the latest Intel Rapid Storage Technology drivers and the Intel Storage Matrix drivers that are listed on HP's website and copied the drivers over to floppy disk (two separate floppies, one for each version of the drivers). Booting to my WinXP Pro x64 install CD, I go through the F6 process, load the driver, and am able to see my HDD and delete, create and format partitions on it. When I go to continue the install, after checking the disk, the system asks me to insert the disk labeled "Intel Rapid Storage Technology" and press enter to continue. Nothing happens at this point when I press enter. This happens whether I use the latest drivers or the older drivers. We have created a slipstreamed install CD using nLite that has the AHCI drivers integrated, which installs fine. However, we have identified a number of issues with the system that I believe are side-effects of using nLite for the slipstreaming, and I am attempting to verify that. I have researched this issue and found a few examples of others having the same problem, but no solution. The USB floppy is a Lacie branded floppy; connecting it to a working XP workstation shows it to be the Y-E Data USB floppy drive that is supposedly 100% compatible with XP per MS KB 916196.

    Read the article

  • How do I set up Grub properly to quad-boot Windows, Mac OS X, Linux, and FreeBSD?

    - by Joe
    Grub has gone completely insane on me. My quad-boot system was working great up until I upgraded Ubuntu to 12.04. Since Ubuntu overwrote the Grub stuff I had to repair it with my Mac OS X and FreeBSD entries. After this, trying to boot Mac OS X gave me the error "couldn't open file" and FreeBSD gave the error "no such partition". Windows and Ubuntu worked fine. So I tried repairing again because I figured something must've gone wrong in the install process. Then only Ubuntu would boot. Trying to boot Windows would give me the error "no argument specified". I tried repairing Grub once again, since I seemed to be getting different results each time. This time, Ubuntu no longer appeared in the Grub menu, and the errors for the other OSes were the same. So I booted into the Ubuntu 12.04 live CD and ran Boot-Repair with recommended settings. Now Grub is completely skipped and Windows boots up. I have absolutely no idea what is going on or why I get different results every time I reinstall Grub. Here is how my partitions are set up:
    sda1 - Storage drive
    sdb1 - Windows
    sdb2 - Mac OS X
    sdb3 - FreeBSD
    sdb4 - Extended
    sdb5 - Ubuntu
    sdb6 - Shared storage
    sdb7 - Shared Storage
    Here's my grub.cfg file: grub.cfg

    Read the article

  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow the users that connect to their network accounts to store their data on a NAS drive. I want the users to connect to the Lion server, as this allows for better policies and management for me, and for their afp share to be located on a NAS drive. I've looked into home directories and network logins; however, I don't want the users to connect into a different login environment, just an authentication against their provided account on the Lion server and for their finder to take them to their own storage area - located on the NAS drive. Currently I am using FreeNAS for both authentication and storage; however, there are getting to be far too many people to manage each afp share and account for, plus just using FreeNAS is extremely limiting for expansion, and if something goes wrong with 1 entity the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business. I have looked into LDAP - using the Lion server as an LDAP server to authenticate against for FreeNAS - however I have had issues with this and thought a different approach could be better, from the other side of the situation: providing the account with somewhere to store data rather than the afp share authenticating against an LDAP server. Am I wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server which can be recognised as a local drive, so it can be used for network accounts?

    Read the article

  • Calling C# object method from IronPython

    - by Jason
    I'm trying to embed a scripting engine in my game. Since I'm writing it in C#, I figured IronPython would be a great fit, but the examples I've been able to find all focus on calling IronPython methods in C# instead of C# methods in IronPython scripts. To complicate things, I'm using Visual Studio 2010 RC1 on Windows 7 64 bit. IronRuby works like I expect it would, but I'm not very familiar with Ruby or Python syntax. What I'm doing:

        ScriptEngine engine = Python.CreateEngine();
        ScriptScope scope = engine.CreateScope();
        // Test class with a method that prints to the screen.
        scope.SetVariable("test", this);
        ScriptSource source = engine.CreateScriptSourceFromString("test.SayHello()", Microsoft.Scripting.SourceCodeKind.Statements);
        source.Execute(scope);

    This generates an error: "'TestClass' object has no attribute 'SayHello'". This exact setup works fine with IronRuby, though, using "self.test.SayHello()". I'm wary of using IronRuby because it doesn't appear as mature as IronPython. If it's close enough, I might go with that though. Any ideas? I know this has to be something simple.
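
    One common cause of that particular error is member visibility rather than the hosting code itself: by default the DLR only binds to public members of public types, so a private or internal TestClass / SayHello() produces exactly this "no attribute" message even though the object was passed in correctly. A minimal sketch of a shape that should work (hypothetical class, added here as an assumption about the cause, not taken from the original question):

        using System;
        using IronPython.Hosting;
        using Microsoft.Scripting.Hosting;

        public class TestClass
        {
            // Must be public for the script to see it without private binding enabled.
            public void SayHello()
            {
                Console.WriteLine("Hello from C#!");
            }

            public static void Main()
            {
                ScriptEngine engine = Python.CreateEngine();
                ScriptScope scope = engine.CreateScope();
                scope.SetVariable("test", new TestClass());
                engine.Execute("test.SayHello()", scope);
            }
        }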

    Read the article

  • Need help on HL7

    - by rohit
    Hi All, I need all your help in guiding me with an HL7 interface integration I am to work on between two disparate clinical applications. It's something like this; let me explain my query with an example. We have an Epic system that places orders (lab, medications, etc.) presently. Now, these lab orders are to be resulted in another Cerner application. For this, there has to be an INTERFACE ENGINE which has to read the HL7 messages coming from the EPIC system, translate them to proper messages for the Cerner SYSTEM, and then write into their database. So, could you please explain to me, with an example, an interface engine which reads the HL7 messages first and translates them to the Cerner application format. How will I implement an Interface Engine here which would read the EPIC data? What steps are involved? An example would be best. Mainly, orders are first placed in EPIC and are to be resulted in Cerner applications. **PLEASE PLEASE help me with understanding the process, and how to do interface integration with an Interface Engine? Thanks Rohit
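
    In practice this is almost always done with an off-the-shelf engine (Mirth Connect, Cloverleaf, Rhapsody, etc.) rather than hand-written code, but as a rough, hedged sketch of the kind of per-message work such an engine performs - purely illustrative, with a made-up ORM message and field mapping, not Epic's or Cerner's actual formats:

        using System;

        class Hl7TransformSketch
        {
            static void Main()
            {
                // A made-up HL7 v2 lab order (ORM^O01) as it might arrive from the sending system.
                // Segments are separated by carriage returns, fields by '|'.
                string inbound =
                    "MSH|^~\\&|EPIC|HOSP|LAB|HOSP|202401011200||ORM^O01|12345|P|2.3\r" +
                    "PID|1||000123^^^MRN||DOE^JOHN\r" +
                    "OBR|1|ORD001||CBC^COMPLETE BLOOD COUNT";

                string[] segments = inbound.Split('\r');
                for (int i = 0; i < segments.Length; i++)
                {
                    string[] fields = segments[i].Split('|');
                    if (fields[0] == "MSH")
                    {
                        // Re-address the message for the receiving system (illustrative values only).
                        fields[4] = "CERNER";   // MSH-5 receiving application
                        fields[5] = "LABDEPT";  // MSH-6 receiving facility
                        segments[i] = string.Join("|", fields);
                    }
                }
                string outbound = string.Join("\r", segments);
                Console.WriteLine(outbound.Replace("\r", Environment.NewLine));
            }
        }

    A real engine does this plus queuing, acknowledgements (ACK/NAK), retries, and the mapping of every segment the receiving system cares about, which is why the usual answer is to configure an existing engine rather than build one.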

    Read the article

  • How to show an image on jasper report?

    - by spderosso
    Hi, I want to show an image on a jasper report. I have the following on the .jrxml:

        <image>
            <reportElement x="181" y="0" width="209" height="74"/>
            <imageExpression class="java.lang.String"><![CDATA["logo.jpg"]]></imageExpression>
        </image>

    The image logo.jpg is in the same directory as the .jrxml. By just putting that it didn't work for me. I googled a bit and found out that jasper report considers what I put on the .jrxml as a relative path to the JVM directory, and that to change this I need to pass as a "REPORT_FILE_RESOLVER" parameter a FileResolver that returns the file. So, I did the following in my .java (it is located in the same place as the .jrxml and the image):

        FileResolver fileResolver = new FileResolver() {
            @Override
            public File resolveFile(String fileName) {
                return new File(fileName);
            }
        };
        HashMap<String, Object> parameters = new HashMap<String, Object>();
        parameters.put("REPORT_FILE_RESOLVER", fileResolver);
        ...

    Which should return the expected file, but I still get a

        net.sf.jasperreports.engine.JRException: Error loading byte data : logo.jpg
            at net.sf.jasperreports.engine.util.JRLoader.loadBytes(JRLoader.java:301)
            at net.sf.jasperreports.engine.util.JRLoader.loadBytesFromLocation(JRLoader.java:479)
            at net.sf.jasperreports.engine.JRImageRenderer.getInstance(JRImageRenderer.java:180)
            ...

    What am I doing wrong? Thanks!

    Read the article

  • how to export bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons and actions from blender into an engine of sorts that I'm working on. But I'm getting the animations wrong. I can tell the basic motion paths are being followed but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or the way I am exporting the appropriate joint matrices from blender in my exporter script. I'll explain the theory, the engine animation system and my blender export script, hoping someone might catch the error in either or all of these. The theory: (I'm using column-major ordering since that's what I use in the engine cause it's OpenGL-based) Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I was to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v. Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix. A transform from j's local space to its parent space which I'll denote Bj. if j was part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v. Now further assume I have a a set of frames, each with a second transform Cj, which works the same as Bj only that for a different, arbitrary spatial configuration of join j. Cj still takes vertices from j space to world space but j is rotated and/or translated and/or scaled. Given the above, in order to skin vertex v at keyframe n. I need to: take v from world space to joint j space modify j (while v stays fixed in j space and is thus taken along in the transformation) take v back from the modified j space to world space So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v. Actually, I have one doubt here.. I said the mesh to which v belongs has a transform M which takes from model space to world space. And I've also read in a couple textbooks that it needs to be transformed from model space to joint space. But I also said in 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiples v' by M and not v. But I've tried changing this and it just screws things up in a different way cause there's something else wrong. Finally, If we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But Since skinning is defined as v' = Cj * Bj^-1 * v , Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v Now on to the implementation (Blender side): Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton: Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is: keyframe 0: the joint is in bind / rest pose (the way you see it in the image). 
keyframe 30: the joint translates up (+z in blender) some amount and at the same time rotates pi/4 rad clockwise. keyframe 59: the joint goes back to the same configuration it was in keyframe 0. My first source of confusion on the blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the python api. Right now, this is what my export script does about translating blender's coordinate system to OpenGL's standard system: # World transform: Blender -> OpenGL worldTransform = Matrix().Identity(4) worldTransform *= Matrix.Scale(-1, 4, (0,0,1)) worldTransform *= Matrix.Rotation(radians(90), 4, "X") # Mesh (local) transform matrix file.write('Mesh Transform:\n') localTransform = mesh.matrix_local.copy() localTransform = worldTransform * localTransform for col in localTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) file.write('\n') So if you will, my "world" matrix is basically the act of changing blenders coordinate system to the default GL one with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) the mesh matrix M so that I don't need to multiply it again once per draw call in the engine. About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following: For joint bind poses: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: bindPoseJoint = skeleton.data.bones[joint.name] bindPoseTransform = bindPoseJoint.matrix_local.inverted() file.write('Joint ' + joint.name + ' Transform {\n') translationV = bindPoseTransform.to_translation() rotationQ = bindPoseTransform.to_3x3().to_quaternion() scaleV = bindPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same only that not homogeneous. For joint current / keyframe poses: for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) currentPoseJoint = skeleton.pose.bones[i] currentPoseTransform = currentPoseJoint.matrix translationV = currentPoseTransform.to_translation() rotationQ = currentPoseTransform.to_3x3().to_quaternion() scaleV = currentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') Note that here I go for skeleton.pose.bones instead of data.bones and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. 
From the descriptions in the python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case. The implementation (Engine / OpenGL side): My animation subsystem does the following on each update (I'm omitting parts of the update loop where it's figured out which objects need update and time is hardcoded here for simplicity): static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation); currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling); GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q); inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t); inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } Finally, this is my vertex shader: #version 100 uniform mat4 modelMatrix; uniform mat3 normalMatrix; uniform mat4 projectionMatrix; uniform mat4 skinningPalette[6]; uniform lowp float skinningEnabled; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } vec4 skinnedNormal = vec4(0.); for (int i = 0; i < 4; i++) { skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.); } vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled); vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, 
skinningEnabled); vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); gl_Position = projectionMatrix * modelMatrix * finalPosition; } The result is that the animation displays wrong in terms of orientation. That is, instead of bobbing up and down it bobs in and out (along what I think is the Z axis according to my transform in the export clip). And the rotation angle is counterclockwise instead of clockwise. If I try with a more than one joint, then it's almost as if the second joint rotates in it's own different coordinate space and does not follow 100% its parent's transform. Which I assume it should from my animation subsystem which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?

    Read the article

  • From VB6 to .net via COM and Remoting...What a mess!

    - by Robert
    I have some legacy VB6 applications that need to talk to my .Net engine application. The engine provides an interface that can be connected to via .Net Remoting. Now I have a stub class library that wraps all of the types that the interface exposes. The purpose of this stub is to translate my .Net types into COM-friendly types. When I run this class library as a console application, it is able to connect to the engine, call various methods, and successfully return the wrapped types. The next step in the chain is to allow my VB6 application to call this COM-enabled stub. This works fine for my main engine-entry type (IModelFetcher, which is wrapped as COM_ModelFetcher). However, when I try to get any of the model fetcher's model types (IClientModel wrapped as COM_IClientModel, IUserModel wrapped as COM_IUserModel, etc.), I get the following exception:

        [Exception - type: System.InvalidCastException 'Return argument has an invalid type.'] in mscorlib
            at System.Runtime.Remoting.Proxies.RealProxy.ValidateReturnArg(Object arg, Type paramType)
            at System.Runtime.Remoting.Proxies.RealProxy.PropagateOutParameters(IMessage msg, Object[] outArgs, Object returnValue)
            at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
            at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
            at AWT.Common.AWTEngineInterface.IModelFetcher.get_ClientModel()
            at AWT.Common.AWTEngineCOMInterface.COM_ModelFetcher.GetClientModel()

    The first thing I did when I saw this was to handle the 'AppDomain.CurrentDomain.AssemblyResolve' event, and this allowed me to load the required assemblies. However, I'm still getting this exception now. My AssemblyResolve event handler is loading three assemblies correctly, and I can confirm that it does not get called prior to this exception. Can someone help me untie myself from this mess of interprocess communication?!

    Read the article

  • Assigning a property across threads

    - by Mike
    I have set a property across threads before, and I found this post http://stackoverflow.com/questions/142003/cross-thread-operation-not-valid-control-accessed-from-a-thread-other-than-the-t about getting a property. I think my issue with the code below is that the collection is an object (on the heap), so assigning it to a variable just creates another reference to the same object. So my question is: besides creating a deep copy, or copying the collection into a different List object, is there a better way to do the following to avoid the error during the for loop?

        Cross-thread operation not valid: Control 'lstProcessFiles' accessed from a thread other than the thread it was created on.

    Code:

        private void btnRunProcess_Click(object sender, EventArgs e)
        {
            richTextBox1.Clear();
            BackgroundWorker bg = new BackgroundWorker();
            bg.DoWork += new DoWorkEventHandler(bg_DoWork);
            bg.RunWorkerCompleted += new RunWorkerCompletedEventHandler(bg_RunWorkerCompleted);
            bg.RunWorkerAsync(lstProcessFiles.SelectedItems);
        }

        void bg_DoWork(object sender, DoWorkEventArgs e)
        {
            WorkflowEngine engine = new WorkflowEngine();
            ListBox.SelectedObjectCollection selectedCollection = null;
            if (lstProcessFiles.InvokeRequired)
            {
                // Try #1
                selectedCollection = (ListBox.SelectedObjectCollection)
                    this.Invoke(new GetSelectedItemsDelegate(GetSelectedItems),
                        new object[] { lstProcessFiles });
                // Try #2
                //lstProcessFiles.Invoke(
                //    new MethodInvoker(delegate {
                //        selectedCollection = lstProcessFiles.SelectedItems; }));
            }
            else
            {
                selectedCollection = lstProcessFiles.SelectedItems;
            }

            // *********Same Error on this line********************
            // Cross-thread operation not valid: Control 'lstProcessFiles' accessed
            // from a thread other than the thread it was created on.
            foreach (string l in selectedCollection)
            {
                if (engine.LoadProcessDocument(String.Format(@"C:\TestDirectory\{0}", l)))
                {
                    try
                    {
                        engine.Run();
                        WriteStep(String.Format("Ran {0} Successfully", l));
                    }
                    catch
                    {
                        WriteStep(String.Format("{0} Failed", l));
                    }
                    engine.PrintProcess();
                    WriteStep(String.Format("Printed {0} to debug", l));
                }
            }
        }

        private delegate void WriteDelegate(string p);
        private delegate ListBox.SelectedObjectCollection GetSelectedItemsDelegate(ListBox list);

        private ListBox.SelectedObjectCollection GetSelectedItems(ListBox list)
        {
            return list.SelectedItems;
        }
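
    For what it's worth, a common way to sidestep this (a hedged sketch added here, not necessarily the only answer) is essentially the "copy into a different object" idea, but done on the UI thread before the worker starts, so the worker never touches the control at all. Requires using System.Linq for Cast; it reuses the poster's existing handler names:

        private void btnRunProcess_Click(object sender, EventArgs e)
        {
            richTextBox1.Clear();
            // Snapshot the selection on the UI thread; plain strings are safe to hand to the worker.
            string[] selectedFiles = lstProcessFiles.SelectedItems.Cast<string>().ToArray();

            BackgroundWorker bg = new BackgroundWorker();
            bg.DoWork += bg_DoWork;
            bg.RunWorkerCompleted += bg_RunWorkerCompleted;
            bg.RunWorkerAsync(selectedFiles);   // pass the copy, not the control's collection
        }

        void bg_DoWork(object sender, DoWorkEventArgs e)
        {
            // No UI access here: work only with the argument passed in.
            foreach (string file in (string[])e.Argument)
            {
                // ... LoadProcessDocument / Run / WriteStep as before ...
            }
        }

    Note that if WriteStep itself updates a control, it still has to marshal back to the UI thread (e.g. via Invoke) for the same reason.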

    Read the article

  • Importing fixtures with foreign keys and SQLAlchemy?

    - by Chris Reid
    I've been experimenting with using fixture to load test data sets into my Pylons / PostgreSQL app. This works great except that it fails to properly create foreign keys if they reference an auto increment id field. My fixture looks like this:

        class AuthorsData(DataSet):
            class frank_herbert:
                first_name = "Frank"
                last_name = "Herbert"

        class BooksData(DataSet):
            class dune:
                title = "Dune"
                author_id = AuthorsData.frank_herbert.ref('id')

    And the model:

        t_authors = sa.Table("authors", meta.metadata,
            sa.Column("id", sa.types.Integer, primary_key=True),
            sa.Column("first_name", sa.types.String(100)),
            sa.Column("last_name", sa.types.String(100)),
        )

        t_books = sa.Table("books", meta.metadata,
            sa.Column("id", sa.types.Integer, primary_key=True),
            sa.Column("title", sa.types.String(100)),
            sa.Column("author_id", sa.types.Integer, sa.ForeignKey('authors.id'))
        )

    When running "paster setup-app development.ini", SQLAlchemy reports the FK value as "None" so it's obviously not finding it:

        15:59:48,683 INFO [sqlalchemy.engine.base.Engine.0x...9eb0] INSERT INTO books (title, author_id) VALUES (%(title)s, %(author_id)s) RETURNING books.id
        15:59:48,683 INFO [sqlalchemy.engine.base.Engine.0x...9eb0] {'author_id': None, 'title': 'Dune'}

    The fixture docs actually warn that this might be a problem: "However, in some cases you may need to reference an attribute that does not have a value until it is loaded, like a serial ID column. (Note that this is not supported by the SQLAlchemy data layer when using sessions.)" http://farmdev.com/projects/fixture/using-dataset.html#referencing-foreign-dataset-classes

    Does this mean that this is just not supported with SQLAlchemy? Or is it possible to load the data without using SA "sessions"? How are other people handling this issue?

    Read the article

  • SQL Server Index cost

    - by yellowstar
    I have read that one of the tradeoffs for adding table indexes in SQL Server is the increased cost of insert/update/delete queries to benefit the performance of select queries. I can conceptually understand what happens in the case of an insert because SQL Server has to write entries into each index matching the new rows, but update and delete are a little more murky to me because I can't quite wrap my head around what the database engine has to do. Let's take DELETE as an example and assume I have the following schema (pardon the pseudo-SQL):

        TABLE Foo
            col1 int
           ,col2 int
           ,col3 int
           ,col4 int
        PRIMARY KEY (col1, col2)
        INDEX IX_1 col3 INCLUDE col4

    Now, if I issue the statement

        DELETE FROM Foo WHERE col1=12 AND col2 > 34

    I understand what the engine must do to update the table (or clustered index if you prefer). The index is set up to make it easy to find the range of rows to be removed and do so. However, at this point it also needs to update IX_1, and the query that I gave it gives no obvious efficient way for the database engine to find the rows to update. Is it forced to do a full index scan at this point? Does the engine read the rows from the clustered index first and generate a smarter internal delete against the index? It might help me to wrap my head around this if I understood better what is going on under the hood, but I guess my real question is this. I have a database that is spending a significant amount of time in delete and I'm trying to figure out what I can do about it. When I display the execution plan for the deletion, it just shows an entry for "Clustered Index Delete" on table Foo which lists in the details section the other indices that need to be updated, but I don't get any indication of the relative cost of these other indices. Are they all equal in this case? Is there some way that I can estimate the impact of removing one or more of these indices without having to actually try it?

    Read the article

  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several of the same ASP.NET applications (one per customer) based on our custom framework (libraries). Each application uses its own database (Initial Catalog in terms of the connection string). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y. I have several general questions about how to design the architecture:
    (1) Can the workflow database be shared by all the applications?
    (2) Where to host the workflow engine - inside our custom Windows NT service or inside IIS? What are the criteria to choose the right host?
    (3) How should the workflow engine communicate with the applications? Should each application call some WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API and the workflow engine will call it? How then will the workflow engine identify the applications? Both cases probably require some application identifier as a parameter in API calls?
    (4) We would also like to store some information in the application databases based on the workflow states. Is it possible? Thanks for suggestions!
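
    On the varying initial settings, one common shape for this (sketched below with hypothetical activity and argument names; an assumption added here, not a prescription) is to keep a single workflow definition and pass the per-customer values in as workflow input arguments when the host starts an instance. The same idea carries over to WorkflowApplication or WorkflowServiceHost if long-running, persisted instances are needed:

        using System;
        using System.Activities;
        using System.Collections.Generic;

        public class OrderApprovalWorkflow : Activity   // would normally come from a XAML definition
        {
            public InArgument<string> NotifyUser { get; set; }
            public InArgument<string> CustomerConnectionString { get; set; }
            // ... workflow body elided ...
        }

        public static class WorkflowHost
        {
            public static void RunForCustomer(string notifyUser, string connectionString)
            {
                var inputs = new Dictionary<string, object>
                {
                    { "NotifyUser", notifyUser },                  // e.g. "X" for one customer, "Y" for another
                    { "CustomerConnectionString", connectionString }
                };
                // WorkflowInvoker is the simplest host; a long-running engine would use
                // WorkflowApplication (or WorkflowServiceHost under IIS) plus SqlWorkflowInstanceStore.
                WorkflowInvoker.Invoke(new OrderApprovalWorkflow(), inputs);
            }
        }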

    Read the article
