Search Results

Search found 6716 results on 269 pages for 'distributed algorithm'.

Page 227 of 269

  • Take a regular Windows 7 clone with Clonezilla (device-to-image)

    - by Mario De Schaepmeester
    I am inexperienced with cloning software and I've decided to use Clonezilla as it seemed the best freeware option. I chose device-to-image and left most options at their defaults, though I picked expert mode anyway to see what I could configure, and decided to try the lzop algorithm instead of the default one for compression. The rest was left at default. When Clonezilla asked me which partitions to clone (I chose parts-to-image), I chose the C:\ drive, but Windows 7 also creates a 100MB partition on setup for system files (the actual boot partition?). I copied that into the image as well. The reason I didn't choose disk-to-image is that I also have a data partition that needs to stay intact. Now I'm simply not sure that this is the way to go, should I ever need to restore my disk image. Will Clonezilla know what to do with both partitions, and will Windows 7 work perfectly after restoring? Edit: apparently a similar question has been asked before. The link to the first article in the answer is not relevant to me, since it covers a device-to-device clone. It appears the Windows installation disk can repair the 100MB partition. As for Clonezilla, it copies "hidden data after the MBR" by default too. I don't know; I feel I'll be all right whether I restore the partition with Clonezilla or repair it with the Windows 7 disk.


  • How to make Thunderbird play nice with Google mail

    - by Christi
    Thunderbird and Gmail aren't exactly the best of friends. Gmail's tags mean that Thunderbird often downloads multiple copies of a single mail. Anything tagged in Gmail will appear in a folder related to that tag, in the "all mail" folder, and possibly in the "inbox" and "sent mail" folders too. Thus a mail with multiple tags could potentially be stored more than four times in a local Thunderbird cache. This makes searching difficult and is obviously wasteful of disk space. The best solution I have come up with is as follows. First, operate a zero-inbox policy (i.e. use the inbox for processing live mail only and archive everything else), which eliminates the extra copy in the inbox. Second, configure Thunderbird not to sync the "Sent Mail" folder; this is a bit of a pain, since I actually find it quite useful to be able to look through just the mails I've sent, but a search can duplicate this functionality. In this way, most of the duplicates are removed, and only mail with tags is stored locally more than once. Ideally, however, I'd like only one copy of each mail to be stored locally. I am surprised Thunderbird doesn't store mail keyed by some sort of hash to prevent precisely this problem, though I suppose that wouldn't be compatible with the way the folders are mirrored in a local directory structure. Can anyone think of a better way to get Thunderbird to cache a Google mail account locally, efficiently?
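    As an aside on the hashing idea the asker mentions: a content-addressed store is straightforward to sketch. The snippet below is a hypothetical illustration (MessageStore and its on-disk layout are invented for this example, and this is not how Thunderbird actually stores mail); each unique message is written once, keyed by its SHA-256 digest, and folders would hold only keys.

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static class MessageStore
        {
            // Write a raw message once, keyed by its content hash.
            // Tag "folders" would then store only the returned key,
            // so a mail carrying four tags still occupies disk once.
            public static string Store(byte[] rawMessage, string cacheDir)
            {
                using (SHA256 sha = SHA256.Create())
                {
                    string key = BitConverter.ToString(sha.ComputeHash(rawMessage))
                                             .Replace("-", "");
                    string path = Path.Combine(cacheDir, key);
                    if (!File.Exists(path))      // duplicate content: skip the write
                        File.WriteAllBytes(path, rawMessage);
                    return key;
                }
            }
        }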


  • SSH with public/private key to iMac fails

    - by bennedich
    I'm trying to connect to my iMac (server) from my MacBook (client) on my LAN. Both run Mac OS X 10.6.4; the server is a fresh, clean install of the OS. When I just activate Remote Login in System Preferences, everything works fine. But when I set up ssh to work only with a public/private key, I get one of the following error messages in the server log, depending on whether the RSA key has a passphrase:

    With passphrase (case 1): PAM: user account has expired for <myServerUserName> from 192.168.X.X via 192.168.X.Y
    Without passphrase (case 2): Failed publickey for <myServerUserName> from 192.168.X.X port AAAAA ssh2

    This is my setup procedure:

    1. Create a private/public key pair on the client with the command ssh-keygen -t rsa. In case 1 I also set a passphrase.
    2. Move id_rsa.pub to the server path /Users/<myServerUserName>/.ssh/ and, in that folder, execute cat id_rsa.pub > authorized_keys.
    3. Making sure Remote Login isn't active, execute sudo /usr/sbin/sshd -d on the server.
    4. Back on the client, type ssh -v -v -v <myServerUserName>@192.168.X.Y and get prompted to accept an RSA key fingerprint. This is NOT the same fingerprint as the one from when I created the private/public key (should it be?). I accept.

    Then, depending on the case: CASE 1: the client is prompted for a password, and the response is permission denied even though the correct password is given. On the server I see the case 1 error stated above (PAM: user account has expired...). CASE 2: the client gets the message Connection closed by 192.168.X.Y. On the server I see the case 2 error stated above (Failed publickey...). What could possibly cause this?


  • mdadm, LVM and ext4 slowness - how can I speed it up?

    - by beatbreaker
    I can't figure out why I'm getting such terrible times out of my mdadm array, and in particular out of the LVM partitions in it. I made the RAID with:

        mdadm --create --verbose /dev/md0 --level=5 --chunk=1024 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

        # cat /proc/mdstat
        Personalities : [raid6] [raid5] [raid4]
        md0 : active raid5 sda1[0] sdd1[3] sdc1[2] sdb1[1]
              2930279424 blocks level 5, 1024k chunk, algorithm 2 [4/4] [UUUU]

    I then created the physical volume, volume group, and logical volumes, and formatted the logical volumes to ext4 using the following command, which I got from http://busybox.net/~aldot/mkfs_stride.html:

        mkfs.ext3 -b 4096 -E stride=256,stripe-width=768 /dev/datavg/blah

    Now I'm confused. These LVs ran really quickly in mdadm before, but now that I've 'optimized' everything, it's slower. Before:

        /dev/datavg/lv_audio:
         Timing buffered disk reads: 598 MB in 3.01 seconds = 198.85 MB/sec

    but now, after:

        /dev/datavg/audio:
         Timing buffered disk reads: 198 MB in 3.00 seconds = 65.96 MB/sec

    That's pitiful! What has happened here? Did I not follow the instructions correctly? Can I reshape the ext4 partitions back to the defaults? (I used defaults before and they were fine!)
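    For what it's worth, the stride numbers in that command do follow from the array geometry described above, so the slowdown is unlikely to be a simple arithmetic slip:

        stride       = chunk / block size   = 1024 KiB / 4 KiB = 256
        stripe-width = stride * data disks  = 256 * (4 - 1)    = 768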


  • How to programmatically add an X.509 certificate to the local machine store using C#

    - by David
    I understand the question title may be a duplicate, but I have not found an answer for my situation yet, so here goes. I have this simple piece of code:

        // Convert the filename to an X509 certificate
        X509Certificate2 cert = new X509Certificate2(certificateFilePath);

        // Get the server certificate store
        X509Store store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);
        store.Open(OpenFlags.MaxAllowed);
        store.Add(cert); // x509 certificate created from a user supplied filename

    but I keep being presented with an "Access Denied" exception. I have read some information suggesting that StorePermission would solve my issue, but I don't think it is relevant to my code. Having said that, I did test it to be sure, and I couldn't get it to work. I also found suggestions that changing folder permissions within Windows was the way to go, and while this may work (not tested), it doesn't seem practical for what will become distributed code. I should also add that since the code will be running as a service on a server, adding the certificates to the current user store seems wrong as well. Is there any way to programmatically add a certificate to the local machine store?
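    A hedged sketch of one common resolution, not a guaranteed fix: writing to the LocalMachine store generally requires administrative rights, so the same Add() call often succeeds once the process runs elevated and the store is opened explicitly for writing:

        using System.Security.Cryptography.X509Certificates;

        // Sketch: assumes the process runs elevated (or as a service
        // account with write access to the machine store).
        var cert = new X509Certificate2(certificateFilePath);
        var store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);
        try
        {
            store.Open(OpenFlags.ReadWrite);   // request write access explicitly
            store.Add(cert);
        }
        finally
        {
            store.Close();
        }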


  • Exception "The operation is not valid for the state of the transaction" using TransactionScope

    - by Lanfear
    We have a web service on server #1 and a database on server #2. The web service uses a transaction scope to produce a distributed transaction. Everything is correct. We also have another database on server #3. We had some problems with this server, so we reinstalled the operating system and software. We configured MSDTC and tried to use the web service on server #1 to communicate with the database on this server. Now, after the first select statement within the transaction scope, we get: "The operation is not valid for the state of the transaction". This exception occurs on every web service request that uses a transaction scope. Servers #2 and #3 are almost identical; the difference can only be in settings. .NET Framework 3.5 SP1 and SQL Server SP3 are installed on all servers. Full stack trace:

        at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
        at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
        at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.SqlClient.SqlConnection.Open()
        at NHibernate.Connection.DriverConnectionProvider.GetConnection()
        at NHibernate.Impl.SessionFactoryImpl.OpenConnection()

    I searched for this message but didn't find any appropriate solution. What settings should I check, and what exactly should I do to fix it?


  • iPhone: Software Development and Distribution

    - by xsl
    I have a few quick questions about iPhone software development. I did some research on the topic, but there are a few specific things I would like to ask here, because I will have to estimate the cost of the required hardware and software before I am allowed to buy anything. I have never done any Mac development, nor have I ever owned an iPhone, so needless to say this is quite hard for me.

    1. I will buy a Mac mini with 2 GB RAM for iPhone development. I will have to use it at the same time as my regular PC, but the majority of the time I won't use the Mac at all. Do I have to buy an additional monitor, a mouse, and a keyboard, or is there a better solution?
    2. I will have to port a C library to the iPhone platform and develop an iPhone application that uses the ported library. Do I need anything other than the iPhone SDK to do this?
    3. If I use an external library (see above), can I test the application with the integrated emulator, or is it recommended to buy the device? In a later phase of the project I will have to buy an iPhone, but I will have to wait until the iPhone 4 is released here in Europe, because the application requires a camera. In addition to this, I will have to send data to a remote web service; aside from these two things I don't require any other features. Can I just buy the iPhone online from another country (the iPhones here are SIM-locked), or should I buy one with a contract?
    4. When the application is ready, it will be installed on a few iPhones owned by our customer. For security reasons it is crucial that no third party is involved in this process (i.e. the application should not be distributed on the App Store). Is this possible?


  • Notification Email Best Practices--From Server Setup to Programming

    - by Andrew Wagner
    All, I'm in the process of building a SaaS tool that allows network admins to generate notification emails to the members of the end-users of our platform (among many, many other things). I'm running into a bit of an "out of my expertise" wall, as I know there are a lot of variables involved in configuring an application that can:

    - run in a distributed way via load balancing and still leverage a single mail server for sending notification emails
    - process unsubscribe requests
    - avoid any ISP blacklisting in the process

    If anyone has the time and has done this before, I'd love it if you could walk me through the A-Z of best practices, both from a configuration perspective and an execution perspective, for generating these emails (anything from necessary DNS settings to ideal SMTP setup and configuration). Currently, our application generates email via Google Apps using the PHPMailer class. While this works well, it doesn't queue messages (a potential for timeout problems if any of our clients amasses a very large list of end-users), and Google limits the number of generated email messages to 500/day. I know this is a lofty question, but any guidance you could provide would be smashing and a big help as we work through this hurdle in our beta development stage. Thanks!


  • Binary serialization/de-serialization in C++ and C#

    - by 6pack kid
    Hello. I am working on a distributed application that has two components. One is written in standard C++ (not managed C++) and the other is written in C#. Both communicate via a message bus. I have a situation in which I need to pass objects from the C++ application to the C# application, and for this I need to serialize those objects in C++ and de-serialize them in C# (something like marshaling/un-marshaling in .NET). I need to perform this serialization in binary, not in XML (for performance reasons). I have used Boost.Serialization to do this when both ends were implemented in C++, but now that I have a .NET application on one end, Boost.Serialization is no longer a viable solution. I am looking for a solution that allows me to perform (de)serialization across the C++/.NET boundary, i.e., cross-platform binary serialization. I know I could implement the (de)serialization code in a C++ DLL and use P/Invoke in the .NET application, but I want to keep that as a last resort. Also, I want to know: if I use some standard like gzip, will that be efficient? Are there any other alternatives to gzip? What are their pros and cons? Thanks
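    One commonly cited option for this situation is a schema-based format with implementations on both sides, such as Protocol Buffers. For flavor, here is a minimal hand-rolled sketch of the C# reading side for an agreed-upon little-endian wire layout; the [int32 id][int32 length][UTF-8 bytes] record format is invented for this example, and both ends must agree on it:

        using System.IO;
        using System.Text;

        struct Record { public int Id; public string Name; }

        static class Wire
        {
            // Reads one record written by the C++ side as
            // [int32 id][int32 nameLength][UTF-8 name bytes], little-endian.
            // BinaryReader reads integers as little-endian, which matches
            // what a C++ program writes on x86 without byte swapping.
            public static Record ReadRecord(Stream input)
            {
                var reader = new BinaryReader(input, Encoding.UTF8);
                var rec = new Record();
                rec.Id = reader.ReadInt32();
                int len = reader.ReadInt32();
                rec.Name = Encoding.UTF8.GetString(reader.ReadBytes(len));
                return rec;
            }
        }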


  • How do I install LFE on Ubuntu Karmic?

    - by karlthorwald
    Erlang was already installed:

        $ dpkg -l | grep erlang
        ii  erlang              1:13.b.3-dfsg-2ubuntu2  Concurrent, real-time, distributed function
        ii  erlang-appmon       1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application monitor
        ii  erlang-asn1         1:13.b.3-dfsg-2ubuntu2  Erlang/OTP modules for ASN.1 support
        ii  erlang-base         1:13.b.3-dfsg-2ubuntu2  Erlang/OTP virtual machine and base applica
        ii  erlang-common-test  1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application for automated testin
        ii  erlang-debugger     1:13.b.3-dfsg-2ubuntu2  Erlang/OTP application for debugging and te
        ii  erlang-dev          1:13.b.3-dfsg-2ubuntu2  Erlang/OTP development libraries and header
        [... many more]

    Erlang seems to work:

        $ erl
        Erlang R13B03 (erts-5.7.4) [source] [64-bit] [smp:2:2] [rq:2] [async-threads:0] [hipe] [kernel-poll:false]
        Eshell V5.7.4  (abort with ^G)
        1>

    I downloaded LFE from GitHub and checked out 0.5.2:

        git clone http://github.com/rvirding/lfe.git
        cd lfe
        git checkout -b local0.5.2 e207eb2cad

        $ configure
        configure: command not found

        $ make
        mkdir -p ebin
        erlc -I include -o ebin -W0 -Ddebug +debug_info src/*.erl
        #erl -I -pa ebin -noshell -eval -noshell -run edoc file src/leex.erl -run init stop
        #erl -I -pa ebin -noshell -eval -noshell -run edoc_run application "'Leex'" '"."' '[no_packages]'
        #mv src/*.html doc/

    Must be something stupid I missed :o

        $ sudo make install
        make: *** No rule to make target `install'.  Stop.

        $ erl -noshell -noinput -s lfe_boot start
        {"init terminating in do_boot",{undef,[{lfe_boot,start,[]},{init,start_it,1},{init,start_em,1}]}}
        Crash dump was written to: erl_crash.dump
        init terminating in do_boot ()

    Is there an example of how I would create a hello world source file, compile it, and run it?


  • Code review: is it subjective or objective (quantifiable)?

    - by Ram
    I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one, and our team is geographically distributed. We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but migrated that to JIRA) with VS2008 for development. What are the things you look for when doing a code review? These are the things I came up with:

    - Enforce FxCop rules (we are a Microsoft shop)
    - Check for performance (any tools?), security (thinking about using OWASP Code Crawler), and thread safety
    - Adhere to naming conventions
    - The code should cover edge cases and boundary conditions
    - Handle exceptions correctly (do not swallow exceptions)
    - Check whether the functionality is duplicated elsewhere
    - Method bodies should be small (20-30 lines), and methods should do one thing and one thing only (no side effects / avoid temporal coupling)
    - Do not pass/return nulls in methods
    - Avoid dead code
    - Document public and protected methods/properties/variables

    What other things do you generally look for? I am trying to see if we can quantify the review process (so it would produce identical output when reviewed by different persons). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would differ from one reviewer to another)? The objective is to have a marking system (say, -1 point for each FxCop rule violation, -2 points for not following naming conventions, 2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who are consistently writing good/bad code. The goal is to have the reviewer spend about 30 minutes max on a review (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture, but you get the general idea: the reviewer should not spend days reviewing someone's code). What other objective/quantifiable systems do you follow to identify good/bad code written by developers? Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin.


  • Standard Apache (not OHS) with mod_osso for Single Signon

    - by Markos Fragkakis
    The mod_osso.so module (the Apache plugin for single sign-on, provided by Oracle) is distributed with the Oracle HTTP Server (OHS), which is essentially a modified Apache. I am trying to use it on the standard Apache HTTP Server and have not managed to get it to work.

    Configuration:

    - Apache 2.2.15
    - OHS from the Oracle Web Tier Tools 11.1.1.2.0
    - Red Hat Linux, 64-bit

    I have:

    - Included the module in the modules directory (copied from the corresponding modules dir in OHS).
    - Included the libraries libiau.so and libclutsh.so.11.1 from Oracle Home (the absence of these libraries produced an error on starting Apache).
    - Produced an osso.conf using the ssoreg.sh tool provided with OID (the LDAP implementation of Oracle).
    - Created the required mod_osso.conf file, which I included in httpd.conf.

    The error I get when starting Apache is this:

        # /opt/apache_sso/bin/apachectl -k start
        httpd: Syntax error on line 1075 of /opt/apache_sso/conf/httpd.conf: Syntax error on line 1 of /opt/apache_sso/conf/mod_osso.conf: Cannot load /opt/apache_sso/modules/mod_osso.so into server: /opt/apache_sso/modules/mod_osso.so: undefined symbol: _audit_authentication_request

    My mod_osso.conf:

        # cat /opt/apache_sso/conf/mod_osso.conf
        LoadModule osso_module modules/mod_osso.so

        <IfModule mod_osso.c>
          OssoIdleTimeout off
          OssoIpCheck on
          OssoConfigFile conf/osso.conf

          # Location is the URI you want to protect
          <Location /myapp>
            require valid-user
            #OHS 11g AuthType Osso
            #OHS 10g AuthType Basic
            AuthType Osso
          </Location>
        </IfModule>

    Has anyone made mod_osso work on the standard Apache HTTP server?


  • LINQ-to-SQL: ExecuteQuery(Type, String) populates one field, but not the other

    - by Daniel Schaffer
    I've written an app that I use as an agent to query data from a database and automatically load the results into my distributed web cache. I do this by specifying an SQL query and a type in a configuration. The code that actually does the querying looks like this:

        List<Object> result = null;
        try
        {
            result = dc.ExecuteQuery(elementType, entry.Command).OfType<Object>().ToList();
        }
        catch (Exception ex)
        {
            HandleException(ex, WebCacheAgentLogEvent.DatabaseExecutionError);
            continue;
        }

    elementType is a System.Type created from the type specified in the configuration (using Type.GetType()), and entry.Command is the SQL query. The specific entity type I'm having an issue with looks like this:

        public class FooCount
        {
            [Column(Name = "foo_id")]
            public Int32 FooId { get; set; }

            [Column(Name = "count")]
            public Int32 Count { get; set; }
        }

    The SQL query looks like this:

        select foo_id as foo_id, sum(count) as [count]
        from foo_aggregates
        group by foo_id
        order by foo_id

    For some reason, when the query is executed, the Count property ends up populated, but not the FooId property. I tried running the query myself: the correct column names are returned, and they match what I've specified in my mapping attributes. Help!
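    One plausible explanation, offered as a hypothesis rather than a confirmed diagnosis: for types that are not part of the DataContext's table mapping, ExecuteQuery matches result columns to members by name and the [Column] attributes may be ignored; count happens to line up with Count, while foo_id does not match FooId. Aliasing the columns to the property names sidesteps the attributes entirely:

        // Hypothetical workaround: make the column names themselves
        // match the property names, so name-based materialization works.
        string sql = @"select foo_id as FooId, sum(count) as [Count]
                       from foo_aggregates
                       group by foo_id
                       order by foo_id";
        var result = dc.ExecuteQuery(elementType, sql).OfType<object>().ToList();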


  • [OffTopic] PhD: is it worth it?

    - by Zenzen
    So I'll be graduating next year (hopefully), and lately I've started thinking about getting a PhD in computer science (to be more precise, something related to "distributed systems"; I don't have any specific topics in mind yet). But is it really worth it? A PhD course here lasts 4 years and is free IF you do the "normal" one (which means you have class Monday to Friday), OR you have to pay if you want to study on the weekends. So I've been thinking about either getting a normal full-time job (as a JEE programmer) and doing my PhD on the weekends, OR doing the normal PhD course (Mon-Fri) and getting a part-time job as a software dev (the main reason I want a job is simple: I need to eat). That would give me practical working experience and theoretical knowledge plus a PhD, but it would also mean working 7 days a week from 9 to 5 for 4 years straight (sounds like overkill). Is it, job-wise, really worth getting a PhD? As an employer, do you prefer people with PhDs or MScs? This might be kind of important: I'm not from the US or Western Europe. How are PhDs from other countries seen (I am graduating from one of the best schools in my country, if not the best, but it's still a Central European country...)? Are they useless? I won't hide it: after graduation I am thinking about moving abroad.


  • System.Web.Caching vs. Enterprise Library Caching Block

    - by ESV
    For a .NET component that will be used in both web applications and rich client applications, there seem to be two obvious options for caching: System.Web.Caching or the Enterprise Library Caching Block. Which do you use, and why?

    System.Web.Caching: is this safe to use outside of web apps? I've seen mixed information, but I think the answer is maybe-kind-of-not-really:

    - a KB article warns against non-web-app use on 1.0 and 1.1
    - the 2.0 page has a comment that indicates it's OK: http://msdn.microsoft.com/en-us/library/system.web.caching.cache(VS.80).aspx
    - Scott Hanselman is creeped out by the notion
    - the 3.5 page includes a warning against such use
    - Rob Howard encouraged use outside of web apps

    I don't expect to use one of its highlights, SqlCacheDependency, but the addition of CacheItemUpdateCallback in .NET 3.5 seems like a Really Good Thing.

    Enterprise Library Caching Application Block:

    - other blocks are already in use, so the dependency already exists
    - cache persistence isn't necessary; regenerating the cache on restart is OK
    - some cache items should always be available but be refreshed periodically; for these items, getting a callback after an item has been removed is not very convenient -- it looks like a client will have to just sleep and poll until the cache item is repopulated

    Memcached for Win32 + .NET client: what are the pros and cons when you don't need a distributed cache?
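    For reference, the "use it anyway" option the quoted links argue about boils down to reaching the ASP.NET cache through HttpRuntime.Cache, which is accessible in a plain process as long as System.Web.dll is referenced. A minimal sketch (LoadSettings and the cache key are placeholders invented for this example):

        using System;
        using System.Web;             // reference System.Web.dll even in a non-web app
        using System.Web.Caching;

        Cache cache = HttpRuntime.Cache;
        cache.Insert("settings", LoadSettings(), null,
                     DateTime.UtcNow.AddMinutes(10),   // absolute expiration
                     Cache.NoSlidingExpiration);

        var settings = cache["settings"];  // null once expired -- caller repopulates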


  • WCF configuration for this code

    - by user208081
    I have the following code and would like to convert much of it into WCF configuration settings. As you can see, the code is using WSHttpBinding. I appreciate any help on this.

        try
        {
            // Provides a unique network address that a client uses to communicate with a service endpoint.
            EndpointAddress endpointAddress = new EndpointAddress(new Uri(FAXServiceSettings.Default.FAXReceiveServiceURL));

            // Specify the protocols, transports, and message encoders used for communication between the client and the service.
            // WSHttpBinding represents an interoperable binding that supports distributed transactions and secure, reliable sessions.
            // Specifically, SOAP message security is enabled for secure transmission of the message content.
            WSHttpBinding clientBinding = new WSHttpBinding(SecurityMode.Message);
            clientBinding.OpenTimeout = TimeSpan.FromSeconds(FAXServiceSettings.Default.FAXReceiveServiceOpenTimeoutInSeconds);
            clientBinding.SendTimeout = TimeSpan.FromSeconds(FAXServiceSettings.Default.FAXReceiveServiceOpenTimeoutInSeconds);

            // Use the ChannelFactory to enable the creation of channels to the binding and endpoint.
            using (ChannelFactory<IReceiveFAX> channelFactory = new ChannelFactory<IReceiveFAX>(clientBinding, endpointAddress))
            {
                // Creates a channel of a specified type to a specified endpoint address.
                IReceiveFAX channel = channelFactory.CreateChannel();
                if (channel != null)
                {
                    try
                    {
                        // Submit the FaxSchedule instance for routing.
                        channel.SubmitFAXForRouting(CreateNewFaxScheduleContainerInstance());

                        // Explicitly close the channel using the IClientChannel interface.
                        CloseChannel(channel as IClientChannel);
                    }
                    finally
                    {
                        // Explicitly dispose of the channel using the IDisposable interface.
                        DisposeOfChannel(channel as IDisposable);
                        channel = null;
                    }
                }

                // Close gracefully: transition the CommunicationObject from any state other than Closed into the Closed
                // state, allowing unfinished work (for example, sending buffered messages) to complete before returning.
                channelFactory.Close();
            }
        }
        catch
        {
            throw;
        }

    Pratik
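    As a rough sketch of where this usually ends up (the endpoint name and config contents are hypothetical, not a drop-in answer): the address, binding, and timeouts move into the <system.serviceModel> section of app.config, and the code collapses to a named-endpoint ChannelFactory:

        using System.ServiceModel;

        // "ReceiveFAXEndpoint" would be a hypothetical endpoint name defined in
        // app.config under <system.serviceModel>, with a <wsHttpBinding> binding
        // configuration carrying the security mode and timeout values from the
        // code above.
        using (var channelFactory = new ChannelFactory<IReceiveFAX>("ReceiveFAXEndpoint"))
        {
            IReceiveFAX channel = channelFactory.CreateChannel();
            channel.SubmitFAXForRouting(CreateNewFaxScheduleContainerInstance());
            ((IClientChannel)channel).Close();
            channelFactory.Close();
        }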


  • SVN Workflow - Chicken Before the Egg - Before merging V1 with V2, I need code from V1 to work on V2

    - by Jake
    Hi, our distributed team (3 internal devs and 3+ external devs) uses SVN to manage our codebase for a web site. We have a branch for each minor version (4.1.0, 4.1.1, 4.1.2, etc...). We have a trunk into which we merge each version when we do a release and publish to our site. An example of the problem we are having is thus: a new feature, let's call it "Ability to Create a Project", is added to 4.1.1. Another feature that depends on it, "Ability to Add Tasks to Projects", is scheduled to go in 4.1.2. So, on Monday, we say 4.1.1 is 'closed' and needs to be tested. Our remote developers usually start working on features/tickets for 4.1.2 at this point. Throughout the week we test 4.1.1 and commit any bug fixes back to 4.1.1. Then, on Friday or so, we tag 4.1.1, merge it into trunk, and finally merge it into 4.1.2. But for the 4-5 days we are testing, 4.1.2 doesn't have the code from 4.1.1 that some of the new features for 4.1.2 depend on. So a dev adding the "Ability to Add Tasks to Projects" feature doesn't have the "Ability to Create a Project" feature to build upon, and has to do some file-copy shenanigans to be able to keep working on it. What could/should we do to smooth out this process? P.S. Apologies if this question has been asked before - I did search but couldn't find what I'm looking for.


  • Why is the TransactionScope operation not valid?

    - by Cragly
    I have a routine that uses a recursive loop to insert items into a SQL Server 2005 database. The first call, which initiates the loop, is enclosed within a transaction using TransactionScope. When I first call ProcessItem, the myItem data gets inserted into the database as expected. However, when ProcessItem is called from either ProcessItemLinks or ProcessItemComments, I get the following error: "The operation is not valid for the state of the transaction". I am running this in debug with VS2008 on Windows 7 and have the MSDTC running to enable distributed transactions. The code below isn't my production code, but it is laid out exactly the same. AddItemToDatabase is a method on a class I cannot modify; it uses a standard ExecuteNonQuery(), which creates a connection, then closes and disposes it once completed. I have looked at other postings here and on the internet and still cannot resolve this issue. Any help would be much appreciated.

        using (TransactionScope processItem = new TransactionScope())
        {
            foreach (Item myItem in itemsList)
            {
                ProcessItem(myItem);
            }
            processItem.Complete();
        }

        private void ProcessItem(Item myItem)
        {
            AddItemToDatabase(myItem);
            ProcessItemLinks(myItem);
            ProcessItemComments(myItem);
        }

        private void ProcessItemLinks(Item myItem)
        {
            foreach (Item link in myItem.Links)
            {
                ProcessItem(link);
            }
        }

        private void ProcessItemComments(Item myItem)
        {
            foreach (Item comment in myItem.Comments)
            {
                ProcessItem(comment);
            }
        }

    Here is the top part of the stack trace. Unfortunately I can't show the build-up to this point, as it contains company-sensitive information which I cannot disclose. I hope this is enough information.

        at System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
        at System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
        at System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
        at System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
        at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.SqlClient.SqlConnection.Open()
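    One thing worth ruling out, offered as a guess rather than a diagnosis: TransactionScope defaults to a one-minute timeout, and a deep recursive insert can blow past it; once the transaction has timed out and aborted, every subsequent operation on it raises exactly this exception. A sketch of raising the ceiling:

        // Sketch: pass explicit TransactionOptions instead of the defaults.
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TimeSpan.FromMinutes(10)   // default is 1 minute
        };
        using (var processItem = new TransactionScope(TransactionScopeOption.Required, options))
        {
            foreach (Item myItem in itemsList)
            {
                ProcessItem(myItem);
            }
            processItem.Complete();
        }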


  • Problem with Tapestry palette's arrow icons in IE8

    - by JellyHead
    I'm using Tapestry to create pages for a web app and have been using the palette component to add/delete items to/from a group. The page looks great in Firefox (Tapestry seems biased towards Firefox), but my customers will all be using Internet Explorer (any of versions 6, 7, and 8), and in IE8 the disabled arrow buttons look awful. In Firefox they are faded, using an opacity setting of 25%, but this doesn't work in IE8; instead you get a faded image with an ugly black border around it. In tapestry-core's stylesheet (default.css), you have the following for a disabled arrow button:

        DIV.t-palette-controls BUTTON[disabled] IMG {
          filter: alpha(opacity = 25);
          -moz-opacity: .25;
        }

    These are clearly out of date, as -moz-opacity is no longer supported by Firefox (use opacity: .25 instead). The problem is with filter: alpha(opacity = 25);. If I remove this, the arrows look fine in IE8, but they are not faded. I got the magic instruction

        -ms-filter: "progid:DXImageTransform.Microsoft.Alpha(opacity=25)";

    from various websites, but putting this in does not work either; the arrow icons are ugly again. The icon itself (distributed with Tapestry) just seems to be a regular PNG, but I'm not an expert on image formats, so maybe there's a problem there? Anyone else had this problem?


  • Converting from CVS to SVN to Hg, does 'hg convert' require pointing to an SVN checkout or just a repository?

    - by Terry
    As part of migrating full CVS history to Hg, I've used cvs2svn to create an SVN repo in a local directory. Its first-level directory structure is:

        2010-04-21  09:39 AM    <DIR>          .
        2010-04-21  09:39 AM    <DIR>          ..
        2010-04-21  09:39 AM    <DIR>          locks
        2010-04-21  09:39 AM    <DIR>          hooks
        2010-04-21  09:39 AM    <DIR>          conf
        2010-04-21  09:39 AM               229 README.txt
        2010-04-21  11:45 AM    <DIR>          db
        2010-04-21  09:39 AM                 2 format
                       2 File(s)            231 bytes

    After setting up hg and the convert extension and attempting the conversion, I get the following:

        C:\>hg convert file://localhost/Users/terry/Desktop/repoSVN
        assuming destination repoSVN-hg
        initializing destination repoSVN-hg repository
        file://localhost/Users/terry/Desktop/repoSVN does not look like a CVS checkout
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Git repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Subversion repo
        file://localhost/Users/terry/Desktop/repoSVN is not a local Mercurial repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a darcs repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a monotone repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a GNU Arch repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a Bazaar repo
        file://localhost/Users/terry/Desktop/repoSVN does not look like a P4 repo
        abort: file://localhost/Users/terry/Desktop/repoSVN: missing or unsupported repository

    I have TortoiseHg installed. For info, hg version reports:

        Mercurial Distributed SCM (version 1.4.3)

    This version of Mercurial seems to have some svn bindings, if library.zip in the install is to be believed. Do I need to do a checkout and point hg convert at it for this to work properly?


  • Maven: compiling an AspectJ project containing Java 1.6 source

    - by gmale
    What I want to do is fairly easy. Or so you would think. However, nothing is working properly.

    Requirement: using Maven, compile a Java 1.6 project with the AspectJ compiler. Note: our code cannot compile with javac. That is, compilation fails if aspects are not woven in (because we have aspects that soften exceptions).

    Questions (based on the failed attempts below):

    1. How do you get Maven to run the aspectj:compile goal directly, without ever running compile:compile?
    2. How do you specify a custom compilerId that points to your own ajc compiler?

    Thanks for any and all suggestions. These are the things I've tried that have led to my problem/questions:

    Attempt 1 (fail): specify AspectJ as the compiler for the maven-compiler-plugin:

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>2.2</version>
          <configuration>
            <source>1.6</source>
            <target>1.6</target>
            <compilerId>aspectj</compilerId>
          </configuration>
          <dependencies>
            <dependency>
              <groupId>org.codehaus.plexus</groupId>
              <artifactId>plexus-compiler-aspectj</artifactId>
              <version>1.8</version>
            </dependency>
          </dependencies>
        </plugin>

    This fails with the error:

        org.codehaus.plexus.compiler.CompilerException: The source version was not recognized: 1.6

    No matter what version of the plexus compiler I use (1.8, 1.6, 1.3, etc.), this doesn't work. I actually read through the source code and found that this compiler does not like source code above Java 1.5.

    Attempt 2 (fail): use the aspectj-maven-plugin attached to the compile and test-compile goals:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>aspectj-maven-plugin</artifactId>
          <version>1.3</version>
          <configuration>
            <source>1.6</source>
            <target>1.6</target>
          </configuration>
          <executions>
            <execution>
              <goals>
                <goal>compile</goal>
                <goal>test-compile</goal>
              </goals>
            </execution>
          </executions>
        </plugin>

    This fails when running either mvn clean test-compile or mvn clean compile, because it attempts to execute compile:compile before running aspectj:compile. As noted above, our code doesn't compile with javac; the aspects are required. So mvn would need to skip the compile:compile goal altogether and run only aspectj:compile.

    Attempt 3 (works, but unacceptable): use the same configuration as above but instead run:

        mvn clean aspectj:compile

    This works, in that it builds successfully, but it's unacceptable in that we need to be able to run the compile and test-compile goals directly (m2eclipse auto-build depends on those goals). Moreover, running it this way would require spelling out every goal we want along the way (for instance, we need resources distributed, tests run, test resources deployed, etc.).


  • Parsing a string with Boost Spirit 2 to fill data in a user-defined struct

    - by Surya
    I'm using the Boost.Spirit distributed with Boost 1.42.0, with VS2005. My problem is this: I have a string delimited with commas; the first 3 fields are strings and the rest are numbers, like this:

        String1,String2,String3,12.0,12.1,13.0,13.1,12.4

    My rules are like this:

        qi::rule<string::iterator, qi::skip_type> stringrule = *(char_ - ',');
        qi::rule<string::iterator, qi::skip_type> myrule =
            repeat(3)[*(char_ - ',') >> ','] >> (double_ % ',');

    I'm trying to store the data in a structure like this:

        struct MyStruct
        {
            vector<string> stringVector;
            vector<double> doubleVector;
        };

        MyStruct var;

    I've wrapped it in BOOST_FUSION_ADAPT_STRUCT to use it with Spirit:

        BOOST_FUSION_ADAPT_STRUCT(
            MyStruct,
            (vector<string>, stringVector)
            (vector<double>, doubleVector))

    My parse function parses the line and returns true, and after

        qi::phrase_parse(iterBegin, iterEnd, myrule, boost::spirit::ascii::space, var);

    I'm expecting var.stringVector and var.doubleVector to be properly filled, but that is not the case. What is going wrong? Thanks in advance, Surya


  • Non-Relational Database Design

    - by Ian Varley
    I'm interested in hearing about design strategies you have used with non-relational "NoSQL" databases - that is, the (mostly new) class of data stores that don't use traditional relational design or SQL (such as Hypertable, CouchDB, SimpleDB, the Google App Engine datastore, Voldemort, Cassandra, SQL Data Services, etc.). They're also often referred to as "key/value stores", and at base they act like giant distributed persistent hash tables. Specifically, I want to learn about the differences in conceptual data design with these new databases. What's easier? What's harder? What can't be done at all? Have you come up with alternate designs that work much better in the non-relational world? Have you hit your head against anything that seems impossible? Have you bridged the gap with any design patterns, e.g. to translate from one to the other? Do you even do explicit data models at all now (e.g. in UML), or have you chucked them entirely in favor of semi-structured / document-oriented data blobs? Do you miss any of the major extra services that RDBMSes provide, like relational integrity, arbitrarily complex transaction support, triggers, etc.? I come from a SQL relational DB background, so normalization is in my blood. That said, I get the advantages of non-relational databases for simplicity and scaling, and my gut tells me that there has to be a richer overlap of design capabilities. What have you done? FYI, there have been StackOverflow discussions on similar topics here:

    - the next generation of databases
    - changing schemas to work with Google App Engine
    - choosing a document-oriented database


  • Optimized Publish/Subscribe JMS Broker Cluster and Conflicting Posts on StackOverflow for the Answer

    - by Gene
    Hi, I am looking to build a publish/subscribe distributed messaging framework that can manage huge volumes of message traffic with some intelligence at the broker level. I don't know if there's a topology that describes this, but this is the model I'm going after:

    EXAMPLE MODEL A

    A) There are two running message brokers (ideally all on localhost if possible, for easier demo-ing): Broker-A and Broker-B.
    B) Each broker will have 2 listeners and 1 publisher:

        [subscriber A1, subscriber A2, publisher A1] <-- BrokerA <-- BrokerB <-- [publisher B1, subscriber B1, subscriber B2]

    IF a message-X is published to Broker-A and there are no subscribers for it among the listeners on Broker-B (via criteria in message selectors or broker routing rules), then that message-X will never be published to Broker-B. ELSE, Broker-A will publish the message to Broker-B, where one of the Broker-B listeners/subscribers/services is expecting that message based on the subscription criteria.

    Is Clustering the Correct Approach? At first, I concluded that the "broker clustering" concept is what I needed to support this. However, as I have come to understand it, the typical use of clustering entails either message redundancy across all brokers, or the Competing Consumers pattern, and neither of these satisfies the requirement in EXAMPLE MODEL A.

    What is the Correct Approach? My question is: does anyone know of a JMS implementation that supports the model I described? I scanned through all the StackOverflow post titles for the search JMS and Cluster, and I found these two informative but seemingly conflicting posts:

    Says EXAMPLE MODEL A is (or should be) implicitly supported: http://stackoverflow.com/questions/2255816/jms-consumer-with-activemq-network-of-brokers -- "this means you pick a broker, connect to it, and let the broker network sort it out amongst themselves. In theory."

    Says EXAMPLE MODEL A IS NOT supported: http://stackoverflow.com/questions/2017520/how-does-a-jms-topic-subscriber-in-a-clustered-application-server-recieve-message -- "All the instances of PropertiesSubscriber running on different app servers WILL get that message."

    Any suggestions would be greatly appreciated. Thanks very much for reading my post, Gene


  • Managing common components with Fossil CVS

    - by Larry Lustig
    I'm a Fossil (and CVS configuration) novice attempting to create and manage a set of distributed Fossil repositories for a Delphi project. I have the following directory tree:

        Projects
            Some Project
            Delphi Components
                LookupListView
            Some Client
                Some Project For Client
                Some Other Project For Client
                    Source Code
                    Project Resources
                    Project Database

    I am setting up Fossil version control in order to version and share Projects\Some Client\Some Other Project For Client\Source Code, which contains Delphi 2010 source for a database project. This project makes use of Projects\Delphi Components\LookupListView, which is a Delphi component. I need this code to be included in the versioning system for my project, and I will, in theory, need to include it in other Fossil repositories in the future as well. If I create my Fossil repository at the Source Code or Some Other Project For Client level, I cannot add any code above that level to my repository. What is the proper way to deal with this? The two solutions that occur to me are:

    1. Creating a separate repository for LookupListView and making sure that everyone who uses a repository for a project that references it "knows" that they must also get the current version of this project. This seems to defeat the purpose of being able to obtain a complete, current version of the project with a single checkout. The problem is magnified because there are other common component dependencies in this project.
    2. Establishing my Fossil repository in the Projects directory, so I can check in files from various subfolders. This seems to involve an awful lot of extra path-typing when doing adds, and it also imposes my directory structure (Some Client\Some Other Project For Client\Source) on the other users of the repository -- in this case, the actual client.

    Any suggestions appreciated.

