Search Results

Search found 23347 results on 934 pages for 'salesforce service cloud'.

Page 510 of 934

  • ubuntu 12.04 copy whole server

    - by Jiechao Li
    Hi all. My company hosts all of its software, including the application and website, on one local Ubuntu server. Recently we started using Amazon EC2 and want to move the whole server to the cloud. Is there any way to copy the entire server to EC2? The OS is the same, Ubuntu 12.04; it is just one server with one user account, and on EC2 it is also one instance with one account. Is there a simple and quick way to do that? Thanks a lot!

    Read the article

  • Missing line number in stack trace even though the PDB files are included

    - by Farzad
    This is driving me nuts. I have a web service implemented in C# using VS 2008, published on IIS. I have modified the release build so the PDB files are copied along with the DLLs into the target directory under inetpub, and the web.config file has debug=true. When I call a web method that throws an exception, the stack trace does not contain line numbers. I have no idea what I am missing here; any ideas?

    Additional info: if I run the web app using the VS built-in web server, it works and I get line numbers in the stack trace. But if I copy the same files (PDB and DLL) that the built-in web server is using to IIS, the line numbers are still missing from the stack trace. It seems that something related to IIS ignores the PDB files!

    Update: when I publish to IIS, all the PDB files are published under the bin directory and everything looks fine. But when I go to "C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files", under the directory for my project, I can see that the assembly (.dll) files are all there, but there are no PDB files. This does not happen when I run the project using the VS built-in web server. If I copy the PDB files manually to the temp folder, I can see the line numbers. Any idea why the PDB files are not copied to the temp folder? BTW, when I attach to the worker process it says the symbols are loaded!

    Read the article

  • WCF InProcFactory error

    - by Terence Lewis
    I'm using IDesign's ServiceModelEx assembly to provide additional functionality over and above what's available in standard WCF. In particular I'm making use of InProcFactory to host some WCF services within my process using named pipes. However, my process also declares a TCP endpoint in its configuration file, which I host and open when the process starts. At some later point, when I try to host a second instance of this service through a named pipe using the InProcFactory (from a different service in the same process), for some reason it picks up the TCP endpoint in the configuration file and tries to re-host that endpoint, which throws an exception because the TCP port is already in use by the first host. Here is the relevant code from InProcFactory.cs in ServiceModelEx:

        static HostRecord GetHostRecord<S,I>() where I : class
                                               where S : class,I
        {
            HostRecord hostRecord;
            if(m_Hosts.ContainsKey(typeof(S)))
            {
                hostRecord = m_Hosts[typeof(S)];
            }
            else
            {
                ServiceHost<S> host;
                if(m_Singletons.ContainsKey(typeof(S)))
                {
                    S singleton = m_Singletons[typeof(S)] as S;
                    Debug.Assert(singleton != null);
                    host = new ServiceHost<S>(singleton,BaseAddress);
                }
                else
                {
                    host = new ServiceHost<S>(BaseAddress);
                }
                string address = BaseAddress.ToString() + Guid.NewGuid().ToString();
                hostRecord = new HostRecord(host,address);
                m_Hosts.Add(typeof(S),hostRecord);
                host.AddServiceEndpoint(typeof(I),Binding,address);
                if(m_Throttles.ContainsKey(typeof(S)))
                {
                    host.SetThrottle(m_Throttles[typeof(S)]);
                }
                // This line fails because it tries to open two endpoints,
                // instead of just the named-pipe one
                host.Open();
            }
            return hostRecord;
        }
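    A workaround worth trying (a sketch of mine, not part of ServiceModelEx): have the in-proc factory construct a ServiceHost subclass that skips ApplyConfiguration, the step in which WCF reads the service's endpoints out of the config file. That way only the named-pipe endpoint added in code gets opened:

        using System;
        using System.ServiceModel;

        // Hedged sketch: a host that ignores <system.serviceModel> in the
        // config file, so the TCP endpoint declared there is never re-hosted.
        public class ConfigIgnoringServiceHost : ServiceHost
        {
            public ConfigIgnoringServiceHost(Type serviceType, params Uri[] baseAddresses)
                : base(serviceType, baseAddresses) { }

            protected override void ApplyConfiguration()
            {
                // Deliberately skip base.ApplyConfiguration(): that call is
                // what loads endpoints (and behaviors) from the config file.
                // Any config-declared behaviors you still need must now be
                // added programmatically.
            }
        }

    The trade-off is that nothing from the config file is applied to hosts created this way, so behaviors the in-proc services rely on have to be added in code.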

    Read the article

  • NFS failover WITHOUT DRBD?

    - by user439407
    So I am trying to set up a redundant NFS share in a cloud environment (all links internal, half-gig links), and I am looking into using Heartbeat for failover, but all the guides seem to be about combining DRBD and Heartbeat to create a robust environment. If need be I can do that, but since my content is almost completely static, I would like to avoid the extra overhead and complexity of DRBD if possible, while still being able to fail over if one of the NFS servers fails. Is it possible to use Heartbeat with NFS to achieve high availability without using DRBD to replicate the blocks? I am not married to NFSv4, so if NFSv3 over UDP is necessary, that won't be a problem (only a very small number of clients will be connecting to the share). Any comments are appreciated.

    Read the article

  • Using EJB in Wicket WebPage

    - by Errandir
    When I use the @EJB annotation to access a stateless EJB through its remote interface in a plain HttpServlet, it works OK:

        public class ListMsgs extends HttpServlet {
            @EJB
            private Msgs msgsRI;
            ...
            protected void processRequest(...) ... {
                List msgs = msgsRI.getAll();
                ...
            }
            ...
        }

    But when I try the same thing in a Wicket WebPage, the injected bean is null:

        public class ListM extends WebPage {
            @EJB
            private Msgs msgsRI;
            ...
            public ListM() {
                List msgs = msgsRI.getAll(); // NullPointerException
                ...
            }
            ...
        }

    The first several lines of the resulting "Unexpected RuntimeException" are:

        WicketMessage: Can't instantiate page using constructor public testapp.web.ListM()
        Root cause: java.lang.NullPointerException
        at testapp.web.ListM.<init>(ListM.java:22)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.wicket.session.DefaultPageFactory.createPage(DefaultPageFactory.java:192)
        at org.apache.wicket.session.DefaultPageFactory.newPage(DefaultPageFactory.java:57)
        at org.apache.wicket.request.target.component.BookmarkablePageRequestTarget.newPage(BookmarkablePageRequestTarget.java:298)
        at org.apache.wicket.request.target.component.BookmarkablePageRequestTarget.getPage(BookmarkablePageRequestTarget.java:320)
        at org.apache.wicket.request.target.component.BookmarkablePageRequestTarget.processEvents(BookmarkablePageRequestTarget.java:234)
        at org.apache.wicket.request.AbstractRequestCycleProcessor.processEvents(AbstractRequestCycleProcessor.java:92)
        at org.apache.wicket.RequestCycle.processEventsAndRespond(RequestCycle.java:1250)
        at org.apache.wicket.RequestCycle.step(RequestCycle.java:1329)
        at org.apache.wicket.RequestCycle.steps(RequestCycle.java:1428)
        at org.apache.wicket.RequestCycle.request(RequestCycle.java:545)
        at org.apache.wicket.protocol.http.WicketFilter.doGet(WicketFilter.java:479)
        at org.apache.wicket.protocol.http.WicketServlet.doGet(WicketServlet.java:138)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
        ....

    There is an EJB module with the bean, and a web module with the servlet and the Wicket web page, deployed to a GlassFish v2.1.1 server (in case it's relevant). What should I do to use my enterprise bean through its remote interface in a Wicket web page?

    Read the article

  • Why do I get "Permission denied (publickey)" when trying to SSH from local Ubuntu to an Amazon EC2 server?

    - by Vorleak Chy
    I have an application running in the cloud on an Amazon EC2 instance, and I need to connect to it from my local Ubuntu machine. It works fine from one of my local Ubuntu machines and also from my laptop, but I get the message "Permission denied (publickey)" when trying to SSH to EC2 from another local Ubuntu machine. It's very strange to me. I'm thinking it may be some sort of problem with the security settings on the Amazon EC2 side, which may have limited which IPs can access the instance, or perhaps the certificate needs to be regenerated. Does anyone know a solution?

    Read the article

  • Why does my REST request return garbage data?

    - by Alienfluid
    I am trying to use LWP::Simple to make a GET request to a REST service. Here's the simple code:

        use LWP::Simple;
        $uri = "http://api.stackoverflow.com/0.8/questions/tagged/php";
        $jsonresponse = get $uri;
        print $jsonresponse;

    On my local machine, running Ubuntu 10.4 and Perl 5.10.1:

        farhan@farhan-lnx:~$ perl --version
        This is perl, v5.10.1 (*) built for x86_64-linux-gnu-thread-multi

    I can get the correct response and have it printed on the screen, e.g.:

        farhan@farhan-lnx:~$ head -10 output.txt
        {
          "total": 1000,
          "page": 1,
          "pagesize": 30,
          "questions": [
            {
              "tags": [
                "php",
                "arrays",
                "coding-style"
        (... snipped ...)

    But on my host's machine, which I SSH into, I get garbage printed on the screen for the same exact code. I am assuming it has something to do with the encoding, but the REST service does not return the character set type in the response, so how do I force LWP::Simple to use the correct encoding? Any ideas what may be going on here? Here's the version of Perl on my host's machine:

        [dredd]$ perl --version
        This is perl, v5.8.8 built for x86_64-linux-gnu-thread-multi

    Read the article

  • Timeout reading verity collection - CF8

    - by Gary
    For a long time now I've been having a problem with the Verity search service bundled with ColdFusion 8. The issue is timeout errors occurring when performing any operation on a collection. It's intermittent, and usually occurs after a few operations have been performed successfully. For instance: if I'm adding records to a collection, the first 15 or so records will go through with no problems, but all subsequent records will time out until the service is rebooted. I'm on a shared server, Windows 2008, 64-bit as far as I know. The error I receive is:

        An error occurred while performing an operation in the Search Engine library.
        Error reading collection information.: com.verity.api.administration.ConfigurationException:
        java.io.IOException: Read timed out

    Having spoken to my hosting company, and after doing some research, it's been suggested that the number of collections on a server may cause this issue. I've reduced the number of collections I use, and there are currently 39 collections on the server. As I'm on a shared server, I have no control over how many collections other customers use; however, I've read that the limit is 128 collections, so I don't see why 39 should make it unusable. The collections aren't big; there are maybe around 5,000 records between all of them. Any ideas?

    Read the article

  • Blackberry MDS simulator - Can't connect to the internet in the simulator.

    - by bcoyour
    I'm trying to do some testing of a website through the BlackBerry simulator. While the simulator itself works fine, I can't get to any sites in the BlackBerry browser. Here is the specific setup I'm using:

        - Windows 7 (64-bit) Home Edition
        - The latest (at the time) MDS installation - BlackBerry Email and MDS Services Simulators 4.1.4
        - The latest (at the time) BlackBerry simulator - BlackBerry Smartphone Simulators 5.0.0 (5.0.0.442) - 9700

    I first start the MDS service; it briefly pops up a command prompt and then closes it, so I'm assuming that the MDS service has started. Then I open the BlackBerry simulator (9700), which opens fine and loads the BlackBerry OS. With the OS loaded, I navigate to the browser and type, for example, www.google.com; at the bottom it just says "sending request" and loads for about a minute, then times out and says it can't find a connection. Anyone have any thoughts on what I'm missing? Or does anyone know of an online simulator for the BlackBerry? So far this has been a huge pain for testing sites on a BlackBerry. Thank you! Ben

    Read the article

  • Using Apache Camel how do I unmarshal my deserialized object that comes in through a CXF Endpoint?

    - by ScArcher2
    I have a very simple Camel route. It starts with a CXF endpoint exposed as a web service. I then want to convert the payload to XML and call a method on a bean. Currently I'm getting a CXF-specific object after the web service call. How do I take my serialized object out of the CXF MessageContentsList and use it going forward?

    My route:

        <camel:route>
            <camel:from uri="cxf:bean:helloEndpoint" />
            <camel:marshal ref="xstream-utf8" />
            <camel:to uri="bean:hello?method=hello"/>
        </camel:route>

    The XML-serialized message:

        <?xml version='1.0' encoding='UTF-8'?>
        <org.apache.cxf.message.MessageContentsList serialization="custom">
            <unserializable-parents />
            <list>
                <default>
                    <size>1</size>
                </default>
                <int>6</int>
                <com.whatever.Person>
                    <firstName>Joe</firstName>
                    <middleName></middleName>
                    <lastName>Buddah</lastName>
                    <dateOfBirth>2010-04-13 12:09:00.137 CDT</dateOfBirth>
                </com.whatever.Person>
            </list>
        </org.apache.cxf.message.MessageContentsList>

    I would expect the XML to be more like this:

        <com.whatever.Person>
            <firstName>Joe</firstName>
            <middleName></middleName>
            <lastName>Buddah</lastName>
            <dateOfBirth>2010-04-13 12:09:00.137 CDT</dateOfBirth>
        </com.whatever.Person>

    Read the article

  • xsd.exe - schema to class - for use with WCF

    - by NealWalters
    I have created a schema as an agreed-upon interface between our company and an external company. I am now creating a WCF C# web service to handle the interface. I ran the XSD utility and it created a C# class. The schema was built in BizTalk and references other schemas, so all in all there are over 15 classes being generated. I put the [DataContract] attribute in front of each of the classes. Do I have to put the [DataMember] attribute on every single property? When I generate a test client program, the proxy does not have any code for any of these 15 classes. We used to use this technique with .asmx services, but I'm not sure it works the same with WCF. If we change the schema, we would want to regenerate the WCF class; would we then have to redecorate it with all the [DataMember] attributes each time? Is there a newer tool similar to xsd.exe that works better with WCF? Thanks, Neal Walters

    SOLUTION (buried in one of Saunders' answers/comments): add XmlSerializerFormat to the interface definition:

        [OperationContract]
        [XmlSerializerFormat] // ADD THIS LINE
        Transaction SubmitTransaction(Transaction transactionIn);

    Two notes: 1) After I did this, I saw a lot more .xsds in my proxy (Service Reference) test client program, but I didn't see the new classes in my IntelliSense. 2) For some reason, until I did a build on the project, I didn't get all the classes in IntelliSense (not sure why).

    Read the article

  • Are there any Pandora/Slacker-like applications to create stations/playlists from a personal MP3 library?

    - by Randy K
    I'm interested in having playlists created automatically, much in the same way that Pandora and Slacker Radio "create" radio channels. I understand that iTunes has the Genius feature, but this requires that the files have been encoded with iTunes, which is not the case with my music; that's one of several reasons I'm not considering iTunes. I'm running a Windows environment, but would consider Linux options, as that would give me a reason to learn more about Linux. In the end the music files and playlists will end up on my Android phone and tablets. Working with Amazon's Cloud Player would be nice, but is not required.

    Read the article

  • How to code a URL shortener?

    - by marco92w
    I want to create a URL shortener service where you can enter a long URL into an input field and the service shortens it to "http://www.example.org/abcdef". Instead of "abcdef" there can be any other string of six characters containing a-z, A-Z and 0-9. That makes about 57 billion possible strings (62^6).

    My approach: I have a database table with three columns:

        id, integer, auto-increment
        long, string, the long URL the user entered
        short, string, the shortened URL (or just the six characters)

    I would then insert the long URL into the table, select the auto-increment value for "id", and build a hash of it. This hash would then be inserted as "short". But what sort of hash should I build? Hash algorithms like MD5 create strings that are too long, so I don't think I should use those. A self-built algorithm would work too.

    My idea: for "http://www.google.de/" I get the auto-increment id 239472. Then I do the following steps:

        short = '';
        if divisible by 2, add "a" + the result to short
        if divisible by 3, add "b" + the result to short
        ... until I have divisors for a-z and A-Z.

    That could be repeated until the number isn't divisible any more. Do you think this is a good approach? Do you have a better idea?
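    A commonly suggested alternative, sketched here in C# purely for illustration: skip hashing entirely and encode the auto-increment id itself in base 62. The mapping is unique and reversible, so no collision handling is needed, and six characters already cover the ~57 billion combinations mentioned above.

        using System;
        using System.Text;

        static class Base62
        {
            // Digit order is arbitrary; shuffle it if you don't want
            // sequential ids to produce obviously sequential strings.
            const string Alphabet =
                "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

            // Assumes a non-negative id, as produced by an auto-increment column.
            public static string Encode(long id)
            {
                if (id == 0) return Alphabet[0].ToString();
                var sb = new StringBuilder();
                while (id > 0)
                {
                    sb.Insert(0, Alphabet[(int)(id % 62)]);
                    id /= 62;
                }
                return sb.ToString();
            }
        }

        // With this alphabet, Base62.Encode(239472) == "basC".

    Decoding is the mirror image (multiply by 62 and add each digit's index), so the "short" column is not even strictly necessary; the string can be computed from the id on the fly.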

    Read the article

  • Amazon EC2 and jbossws

    - by avjaz
    Hi - I've deployed a web service to a JBoss instance running on Amazon EC2. The web service works fine locally, but when I deploy it on EC2 and go to the /jbossws/services page, the endpoint address for the web service is the private DNS of the EC2 instance (domU-X-X-X-X etc...), not the public DNS (which is what I would like it to be). I've tried loading the WSDL by changing the private hostname to the public IP; that works, but when I try to call any of the operations I get a HostNotFoundException, I'm guessing because the generated WSDL has the stanza:

        <service name='XXXService'>
            <port binding='tns:XXXBinding' name='XXXPort'>
                <soap:address location='http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal:8080/xx/xx/xx'/>
            </port>
        </service>

    where http://domU-XX-XX-XX-XX-XX-XX.compute-1.internal is the internal DNS of the EC2 instance. The WSDL is auto-generated - is there a JAXB annotation I can use to force the generated WSDL to use the public DNS of the EC2 instance? Many thanks -

    Read the article

  • SqlException: User does not have permission to perform this action.

    - by Oskar Kjellin
    I have been using my website (ASP.NET MVC) in Visual Studio, but now I want to host it on my server. I published from Visual Studio onto the network share to be used. The server is running Windows Home Server, IIS 6 and SQL Server 2008 R2 (Express). In Microsoft SQL Server Management Studio, I've attached the database and made sure that the user IUSR_SERVER is owner of the db. I also made sure that the user Network Service has access. The web site is configured in IIS to run anonymously as IUSR_SERVER. I have granted write and read access to IUSR_SERVER as well as Network Service, and made sure that nothing is read-only. The web.config has this connection string:

        <connectionStrings>
            <remove name="ApplicationServices" />
            <add name="ApplicationServices"
                 connectionString="Data Source=.\SQLExpress;Integrated Security=True;Initial Catalog=MyDatebase"
                 providerName="System.Data.SqlClient" />
        </connectionStrings>

    However, I cannot browse my web site. I only get this error:

        Server Error in '/' Application.
        User does not have permission to perform this action.
        Description: An unhandled exception occurred during the execution of the current web request.
        Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Data.SqlClient.SqlException: User does not have permission to perform this action.

        Stack Trace:
        [SqlException (0x80131904): User does not have permission to perform this action.]
        System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4846887
        System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194

    Feels like I've tried everything. Would be very grateful for your aid in this.

    Read the article

  • PerformancePoint dashboard permissions problem in MOSS

    - by Nathan DeWitt
    I have a PerformancePoint dashboard running in a MOSS 2007 portal. The dashboard consists of one SSRS 2005 report, running in SharePoint Integrated mode. NT Authority\Authenticated Users have read permissions to the report library containing the SSRS report, the dashboard, and the report library containing the dashboard. Users that attempt to access the dashboard receive the following error message:

        The permissions granted to user 'DOMAIN\firstname.lastname' are insufficient
        for performing this operation. (rsAccessDenied)

    Users that then click on the direct link to the report in MOSS see the report with no problem, and subsequent visits to the dashboard show the report with no problem. The report is using a data source that is located one folder up from the report location. The report has been updated to point to the correct shared data source after deployment. Both the report and the data source have been published. The data source is using stored credentials, with a domain service account that has been set to "Use as Windows credentials". This service account is serving other reports in other areas with no problem.

    Edit: OK, I've gathered a lot more information on this problem. The request is never actually being made to the data source. The user comes in to the dashboard and requests the report for the first time, using their Kerberos token to identify themselves. The report server looks in its database, finds that they are not listed in the Users table, and generates this rsAccessDenied error. Once they view the report directly, their name is in this table and they never have the problem again. Unfortunately, removing the user from the Users table in the RS database doesn't cause the error to happen again. Everything I've read says that when you run a report server in MOSS integrated mode all your permissions are handled at the MOSS report library level, and all authenticated users have permissions to the report library, as stated earlier. Any ideas?

    Read the article

  • Migrating to web application with AjaxControlToolkit

    - by Chris
    Hi all, we are currently migrating our ASP.NET website to a web application in Visual Studio 2008. Most of the process has been fairly straightforward, but I have hit one block that is driving me a bit nuts. We are using the AjaxControlToolkit for some functionality, specifically an AutoCompleteExtender. When this is run locally through VS's development server, the extender's dropdown does not render after the service returns the result set. However, if I deploy the migrated solution to our UAT server, the extender functions correctly. I have ensured the Ajax Control Toolkit is properly installed locally on my dev machine (with the DLL available in the bin directory), and using debugging have ensured the service is called correctly and runs through without error (which it does). The web application was taken from a server running IIS 7. Can anyone confirm whether Visual Studio 2008's development server requires a different configuration from IIS 7 (as I believe IIS 6 does), and is there a resource that provides more info? My own searches have turned up very little in this area. On the other hand, if I am looking in the wrong area, any other tips would be appreciated. Thanks, Chris

    Read the article

  • Different versions of JBoss on the same host

    - by Vladimir Bezugliy
    I have JBoss 4 installed on my PC in directory C:\JBoss4, and the environment variable JBOSS_HOME set to this directory: JBOSS_HOME=C:\JBoss4. I need to install JBoss 5.1 on the same PC, so I installed it into C:\JBoss51. In order to start JBoss 5.1 on the same host where JBoss 4 is already running, I need to redefine the properties jboss.home.dir, jboss.home.url and jboss.service.binding.set:

        C:\JBoss51\bin\run.sh -Djboss.home.dir=C:/JBoss51 \
                              -Djboss.home.url=file:/C:/JBoss51 \
                              -Djboss.service.binding.set=ports-01

    But in C:\JBoss51\bin\run.sh I can see the following code:

        ...
        if [ "x$JBOSS_HOME" = "x" ]; then
            # get the full path (without any relative bits)
            JBOSS_HOME=`cd $DIRNAME/..; pwd`
        fi
        export JBOSS_HOME
        ...
        runjar="$JBOSS_HOME/bin/run.jar"
        JBOSS_BOOT_CLASSPATH="$runjar"

    This code depends on neither jboss.home.dir nor jboss.home.url. So when I start JBoss 5.1, will the script use the jar files from JBoss 4? Is that correct? Should I redefine the environment variable JBOSS_HOME when I start JBoss 5.1? In that case the script would use the correct jar files. Or, if I redefine the properties jboss.home.dir and jboss.home.url, will JBoss not use any variables set in run.sh? How does this work?

    Read the article

  • Having trouble coming up with a good architecture for a client/server application

    - by rmw1985
    I am writing a remote backup service meant to support 1000+ users. It is going to use librsync to store reverse diffs (like rdiff-backup) and make data transfer efficient. My trouble is that I do not know the "best" way to implement the client/server model. I have thought of doing it the way rsync/rdiff-backup do: having the client open an SSH connection, run a server executable, and communicate across pipes. Another alternative would be to write a server that handles authentication and communicates with the client via SSL. The reason I have thought about this is that there is "state" information, like how many backup jobs are set up, etc., that must be maintained. Another alternative I have thought about is running a "web service" using Pylons or Django to handle the authentication, but I do not know how to bridge that to the "storage" side. Since I am using librsync, I cannot use "dumb" storage. Is there a way to pipe data through Pylons or Django to a server-side handler that would do the rsync calculation? This seems to me like maybe a dumb question, but I am sort of lost. Any tips or suggestions from more experienced developers would be extremely helpful.

    Read the article

  • Should EC2 server self register or have admin server

    - by hortitude
    I'm creating AWS servers using Chef, and I am also planning on enabling auto-scaling. While we have automatic monitoring set up already (Server Density or Nagios etc.), I was also going to set up a CloudWatch alarm for the status check on each server. This led me down the path of trying to decide whether to install the EC2 command-line tools on the server itself (which then requires me to install Java on the servers, despite no other need for Java in our environment) or to possibly have an "admin" box that will check periodically for servers and make sure they have their alarms set. I expect this paradigm to carry over to other things we want to configure (perhaps ensuring that termination protection is set up on production servers?). Any thoughts as to why to go one direction or another?

    Read the article

  • Plesk error: pmm-ras error (Error code = -6) during restore - do I need to increase the /tmp folder?

    - by eric
    I had to re-install Plesk on a CentOS 6 system after a crash. The full backup file is 11 GB, but at the beginning of the backup restore I get the error:

        Error: pmm-ras error (Error code = -6)

    Argh! My disk organization is like this:

        Filesystem             Size  Used Avail Use% Mounted on
        /dev/xvda1             3.7G  801M  2.9G  22% /
        /dev/mapper/vg00-usr    14G  1.5G   12G  12% /usr
        /dev/mapper/vg00-var   155G   14G  134G  10% /var
        /dev/mapper/vg00-home  3.9G  136M  3.6G   4% /home
        none                  1000M  7.5M  993M   1% /tmp

    I suppose I have to increase my /tmp folder to accommodate the backup size, but I don't know how to. I'm on a 1&1 cloud server. Thanks for your help. You can imagine the emergency of this situation...

    Read the article

  • hosting environment for delivering FLVs

    - by Gotys
    What would be the ideal hardware setup for pushing a lot of bandwidth on a tube site? We have ever-expanding cloud storage where users upload the movies, and then we have these web-delivery machines which cache the FLV files on their local hard drives and deliver them to users. Each cache machine can deliver 1200 Mbit/s if it has 8 SAS hard drives. Such a cache machine costs us $550/month for 8x160gb - so each machine can cache only 160 GB at any given time. If we want to cache more than 160 GB, we need to add another machine... another $550/month... etc. This is very uneconomical, so I am wondering if we have any experts here who can figure out a better setup. I've been looking into GlusterFS, but I am not sure that it can push a lot of bandwidth. Any ideas highly appreciated. Thank you!

    Read the article

  • Collection is empty when it arrives on the client

    - by digiduck
    One of my entities has an EntitySet<CityDto> property with the [Composition], [Include] and [Association] attributes. I populate this collection in my domain service, but when I check its contents as received on the client, the collection is empty. I am using Silverlight 4 RTM as well as RIA Services 1.0 RTM. Any ideas what I am doing wrong? Here is the code on my service side:

        public class RegionDto
        {
            public RegionDto()
            {
                Cities = new EntitySet<CityDto>();
            }

            [Key]
            public int Id { get; set; }
            public string Name { get; set; }

            [Include]
            [Composition]
            [Association("RegionDto_CityDto", "Id", "RegionId")]
            public EntitySet<CityDto> Cities { get; set; }
        }

        public class CityDto
        {
            [Key]
            public int Id { get; set; }
            public int RegionId { get; set; }
            public string Name { get; set; }
        }

        [EnableClientAccess()]
        public class RegionDomainService : LinqToEntitiesDomainService<RegionEntities>
        {
            public IEnumerable<RegionDto> GetRegions()
            {
                var regions = (ObjectContext.Regions
                    .Select(x => new RegionDto { Id = x.ID, Name = x.Name })).ToList();

                foreach (var region in regions)
                {
                    var cities = (ObjectContext.Cities
                        .Where(x => x.RegionID == region.Id)
                        .Select(x => new CityDto { Id = x.ID, Name = x.Name })).ToList();

                    foreach (var city in cities)
                    {
                        region.Cities.Add(city);
                    }
                }

                // Each region's Cities collection is populated at this point;
                // however, when the client receives it, the Cities collections
                // are all empty.
                return regions;
            }
        }

    Read the article

  • EF 4.0 : Save Changes Retry Logic

    - by BGR
    Hi, I would like to implement an application-wide retry system for all entity SaveChanges method calls. Technologies: Entity Framework 4.0, .NET 4.0.

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        namespace Sample.Data.Store.Entities
        {
            public partial class StoreDB
            {
                public override int SaveChanges(System.Data.Objects.SaveOptions options)
                {
                    for (Int32 attempt = 1; ; )
                    {
                        try
                        {
                            return base.SaveChanges(options);
                        }
                        catch (SqlException sqlException)
                        {
                            // Increment tries
                            attempt++;

                            // Maximum number of tries
                            Int32 maxRetryCount = 5;

                            // Throw the error if we have reached the maximum number of retries
                            if (attempt == maxRetryCount)
                                throw;

                            // Determine if we should retry or abort
                            if (!RetryLitmus(sqlException))
                                throw;
                            else
                                Thread.Sleep(ConnectionRetryWaitSeconds(attempt));
                        }
                    }
                }

                static Int32 ConnectionRetryWaitSeconds(Int32 attempt)
                {
                    Int32 connectionRetryWaitSeconds = 2000;

                    // Backoff throttling
                    connectionRetryWaitSeconds =
                        connectionRetryWaitSeconds * (Int32)Math.Pow(2, attempt);

                    return (connectionRetryWaitSeconds);
                }

                /// <summary>
                /// Determine from the exception whether the execution
                /// of the connection should be attempted again
                /// </summary>
                /// <param name="sqlException">SQL exception</param>
                /// <returns>True if a retry is needed, false if not</returns>
                static Boolean RetryLitmus(SqlException sqlException)
                {
                    switch (sqlException.Number)
                    {
                        // The service has encountered an error
                        // processing your request. Please try again.
                        // Error code %d.
                        case 40197:
                        // The service is currently busy. Retry
                        // the request after 10 seconds. Code: %d.
                        case 40501:
                        // A transport-level error has occurred when
                        // receiving results from the server. (provider:
                        // TCP Provider, error: 0 - An established connection
                        // was aborted by the software in your host machine.)
                        case 10053:
                            return (true);
                    }

                    return (false);
                }
            }
        }

    The problem: how can I make StoreDB.SaveChanges retry on a new DB context after an error occurred? Something similar to Detach/Attach might come in handy. Thanks in advance! Bart
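    One way to get a fresh context per attempt, sketched below under a couple of assumptions: RetryingUnitOfWork and ExecuteWithRetry are hypothetical names (not EF APIs), the two helper methods above are made internal so they can be reused, and the overridden SaveChanges is reverted so the retries are not nested. The idea is to move the retry loop out of SaveChanges and around the whole unit of work, so every attempt re-applies its changes to a brand-new StoreDB:

        using System;
        using System.Data.SqlClient;
        using System.Threading;
        using Sample.Data.Store.Entities;

        public static class RetryingUnitOfWork
        {
            // Runs the given work against a fresh StoreDB on each attempt,
            // so a failed SaveChanges never leaves us retrying on a
            // possibly-poisoned context.
            public static void ExecuteWithRetry(Action<StoreDB> work)
            {
                const Int32 maxRetryCount = 5;
                for (Int32 attempt = 1; ; attempt++)
                {
                    try
                    {
                        using (StoreDB context = new StoreDB())
                        {
                            work(context);          // re-apply the changes...
                            context.SaveChanges();  // ...on a brand-new context
                        }
                        return;
                    }
                    catch (SqlException sqlException)
                    {
                        // Reuses the asker's helpers, assumed made accessible.
                        if (attempt == maxRetryCount || !StoreDB.RetryLitmus(sqlException))
                            throw;
                        Thread.Sleep(StoreDB.ConnectionRetryWaitSeconds(attempt));
                    }
                }
            }
        }

        // Usage: RetryingUnitOfWork.ExecuteWithRetry(db => { /* modify entities on db */ });

    The trade-off against retrying inside SaveChanges is that the caller must express its changes as a replayable delegate, but that is exactly what makes the "new DB context" requirement safe to satisfy.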

    Read the article

  • User roles - why not store in session?

    - by Phil
    I'm porting an ASP.NET application to MVC and need to store two items relating to an authenticated user: a list of roles and a list of visible item IDs, which determine what the user can or cannot see. We've used WSE with a web service in the past, and this made things unbelievably complex and impossible to debug properly. Now that we're ditching the web service, I was looking forward to drastically simplifying the solution, simply storing these things in the session. A colleague suggested using the roles and membership providers, but on looking into this I've found a number of problems:

        a) It suffers from similar but different problems to WSE, in that it has to be
           used in a very constrained way, making it tricky even to write tests;
        b) The only caching option for the RoleProvider is based on cookies, which
           we've rejected on security grounds;
        c) It introduces no end of complications and extra unwanted baggage.

    All we want to do, in a nutshell, is store two string variables in a user's session, or something equivalent, in a secure way and refer to them when we need to. What seems to be a ten-minute job has so far taken several days of investigation, and to compound the problem we have now discovered that session IDs can apparently be faked; see http://blogs.sans.org/appsecstreetfighter/2009/06/14/session-attacks-and-aspnet-part-1/

    I'm left thinking there is no easy way to do this very simple job, but I find that impossible to believe. Could anyone: a) provide simple information on how to make ASP.NET MVC sessions secure, as I always believed they were? b) suggest another simple way to store these two string variables for a logged-in user's roles etc. without having to replace one complex nightmare with another as described above? Thank you.
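    For the simple route, here is a minimal sketch; UserContext and its key names are mine, not a framework API. The two values live server-side in Session, so only the session id ever reaches the client; pairing this with HTTPS-only cookies and issuing a fresh session id at login blunts the fixation attacks the linked article describes.

        using System.Collections.Generic;
        using System.Web;

        public static class UserContext
        {
            private const string RolesKey = "UserContext.Roles";
            private const string VisibleIdsKey = "UserContext.VisibleItemIds";

            // Call once, after the user has authenticated.
            public static void Store(HttpSessionStateBase session,
                                     IList<string> roles, IList<int> visibleItemIds)
            {
                session[RolesKey] = roles;
                session[VisibleIdsKey] = visibleItemIds;
            }

            public static IList<string> Roles(HttpSessionStateBase session)
            {
                return session[RolesKey] as IList<string> ?? new List<string>();
            }

            public static IList<int> VisibleItemIds(HttpSessionStateBase session)
            {
                return session[VisibleIdsKey] as IList<int> ?? new List<int>();
            }
        }

        // In an MVC controller action, after login:
        //     UserContext.Store(Session, roles, visibleIds);
        // Later:
        //     var roles = UserContext.Roles(Session);

    Note this assumes in-process session state; with a state server or SQL session store the lists must be serializable, which List<string> and List<int> already are.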

    Read the article
