Search Results

Search found 6031 results on 242 pages for 'grid infrastructure'.

Page 203/242

  • SQL Server slow in production environment

    - by Lieven Cardoen
    I have a weird problem in a customer's production environment. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server. The data, log and filestream files are on another storage server (data and filestream together, and the log on a separate server). In our local test environment, there's one particular query that executes with these durations (first we clear the cache; the first run takes longer, but from then on it's cached): 300ms, 20ms, 15ms, 17ms. In the customer's production environment, where the SQL Server is more powerful, the durations are 2500ms, 2600ms, 2400ms (I didn't have the rights to clear the cache; I'll try that tomorrow). The servers in the customer's production environment are more powerful, but they are virtual servers (ours aren't). What could be the cause? Not enough memory? Fragmentation? Physical storage? How would you tackle this performance problem? EDIT: Some people have asked me if the data set is equal, and it is: I restored their database in our environment. It's true that this was the first thing I looked at. (@Everyone: I added the edit because it will be the first thing that many will think of.)

    Read the article

  • Best method of achieving bi-directional communication between Apple iPad "clients" and a Windows Server

    - by user361910
    We are currently starting to build a client-server system which will see 10 or more Apple iPad client devices communicating with a central Windows server over a wireless LAN. We wanted to use some existing plumbing (.NET Remoting/WCF/web services/etc.) that would allow us to implement a reliable, secure solution without having to start at a low level (e.g. sockets) and reinvent the wheel. One of the major requirements that complicates this scenario is that, unlike a traditional web service, the Windows server needs to be able to arbitrarily notify the clients whenever certain events occur -- so it is not a simple request/response scenario like the web. Initially, we were going to use Windows clients, so our plan was to use the full-duplex mode of .NET WCF over HTTP|TCP. But now, using the iPad, we don't have any of the WCF infrastructure. So my question is: what is the best way to allow an iPad and a Windows server to (securely) communicate over a LAN, with each device able to initiate communication to the other? Am I stuck writing low-level socket code? Thanks!
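
    One option that needs no WCF on the client is plain HTTP long polling, which any iPad HTTP stack can consume. Below is a minimal server-side sketch, not a production implementation: the endpoint URL, port and payload are illustrative assumptions, and a real version would need authentication (e.g. HTTPS plus a token) and per-client event queues.

        using System;
        using System.Net;
        using System.Text;
        using System.Threading;

        class LongPollServer
        {
            // Signalled whenever the server has something to tell a client.
            static readonly AutoResetEvent PendingEvent = new AutoResetEvent(false);

            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://+:8080/events/"); // illustrative endpoint
                listener.Start();

                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext();
                    ThreadPool.QueueUserWorkItem(_ =>
                    {
                        // Park the request until an event fires (or 30s passes),
                        // then answer; the client immediately re-issues the poll,
                        // which gives the server a standing channel to "push" on.
                        bool fired = PendingEvent.WaitOne(TimeSpan.FromSeconds(30));
                        byte[] body = Encoding.UTF8.GetBytes(fired ? "{\"event\":\"ping\"}" : "{}");
                        ctx.Response.ContentType = "application/json";
                        ctx.Response.OutputStream.Write(body, 0, body.Length);
                        ctx.Response.Close();
                    });
                }
            }
        }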

    Read the article

  • Amazon ELB CNAME record not working

    - by BarthQuay
    Hi there, I have set up my EC2 infrastructure behind an ELB instance, and by using the ELB's DNS name everything works as expected. Now I wanted to forward a subdomain of my main project domain to the ELB's DNS name with a CNAME entry. I did this about 12 hours ago and it doesn't seem to work, and I don't know why: the subdomain just can't be resolved. This is the DNS entry, which was processed by my DNS provider without errors yesterday:

        @         IN A        111.111.111.111
        localhost IN A        127.0.0.1
        mail      IN A        111.111.111.111
        www       IN A        111.111.111.111
        ftp       IN CNAME    www
        beta      IN CNAME    myelbnamehere.eu-west-1.elb.amazonaws.com
        imap      IN CNAME    www
        loopback  IN CNAME    localhost
        pop       IN CNAME    www
        relay     IN CNAME    www
        smtp      IN CNAME    www
        @         IN MX 10    mail

    Using nslookup, all the subdomains and the main domain get looked up correctly, but beta.domain.com doesn't: I get "** server can't find beta.domain.com: NXDOMAIN". What am I doing wrong? Do I need to wait longer? When I use the ELB DNS name directly, everything works as expected. When I do an nslookup against my provider's DNS server, the CNAME gets resolved, but it looks like every other DNS server can't find the subdomain. Thanks in advance.

    Read the article

  • Prism - loading modules causes activation error

    - by Noich
    I've created a small Prism application (using MEF) with the following parts:
    - Infrastructure project, which contains a global file of region names.
    - Main project, which has 2 buttons to load other modules.
    - ModulesInterfaces project, where I've defined an IDataAccess interface for my DataAccess module.
    - DataAccess project, where there are several files but one main DataAccess class that implements both IDataAccess and IModule.
    When the user clicks a button, I want to show him a list of employees, so I need the DataAccess module. My code is as follows:

        var namesView = new EmployeesListView();
        var namesViewModel = new EmployeesListViewModel(employeeRepository);
        namesView.DataContext = namesViewModel;
        namesViewModel.EmployeeSelected += EmployeeSelected;
        LocateManager();
        manager.Regions["ContentRegion"].Add(new NoEmployeeView(), "noEmployee");
        manager.Regions["NavigationRegion"].Add(namesView);

    I get an exception in my code when I'm trying to get DataAccess as a service (via ServiceLocator). I'm not sure if that's the right way to go, but trying to get it from the ModuleCatalog (as type IDataAccess) returns an empty enumeration. I'd appreciate some directions. Thanks. The problematic code line:

        object instance = ServiceLocator.Current.GetInstance(typeof(DataAccess.DataAccess));

    The exact exception: Activation error occured while trying to get instance of type DataAccess, key "". The inner exception is null; the stack trace:

        at Microsoft.Practices.Prism.MefExtensions.MefServiceLocatorAdapter.DoGetInstance(Type serviceType, String key) in c:\release\WorkingDir\PrismLibraryBuild\PrismLibrary\Desktop\Prism.MefExtensions\MefServiceLocatorAdapter.cs:line 76
        at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key)

    Edit: The same exception is thrown for:

        container = (CompositionContainer)ServiceLocator.Current.GetInstance(typeof(CompositionContainer));
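
    For what it's worth, MEF's service locator can only hand back types that are actually exported into the composition container. A minimal sketch of what the DataAccess class might need, assuming the class and interface names from the question (the attributes shown are from MEF and Prism's MEF extensions):

        using System.ComponentModel.Composition;
        using Microsoft.Practices.Prism.MefExtensions.Modularity;
        using Microsoft.Practices.Prism.Modularity;

        // ModuleExport registers the type as a Prism module; the extra Export
        // attributes make it resolvable via ServiceLocator by concrete type
        // or by its interface.
        [ModuleExport(typeof(DataAccess))]
        [Export(typeof(DataAccess))]
        [Export(typeof(IDataAccess))]
        public class DataAccess : IDataAccess, IModule
        {
            public void Initialize()
            {
                // module initialization goes here
            }
        }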

    Read the article

  • Perl XML SAX parser emulating XML::Simple record for record

    - by DVK
    Short Q summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple. Details: We have a large code infrastructure which depends on processing records one-by-one and expects each record to be a data structure in the format produced by XML::Simple (which it has used since the early Jurassic era). An example simple XML is:

        <root>
            <rec><f1>v1</f1><f2>v2</f2></rec>
            <rec><f1>v1b</f1><f2>v2b</f2></rec>
            <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        my $records = XML::Simple->XMLin(@args)->{root};
        foreach my $record (@$records) { $obj->process_record($record) };

    As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog -- being a DOM parser, it needs to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records, record-by-record. However, re-writing the entire code base (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple. What I'm looking for is an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can produce $record hashrefs one by one based on the XML pictured above, such that they can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is -- e.g. whether I need to call next_record() or give it a callback coderef accepting a record.

    Read the article

  • What's the best way, using the TFS API, to add a label to a file?

    - by jcollum
    I'd like to add a label to a set of files using the TFS API. My code looks like this:

        VersionControlLabel label = new VersionControlLabel(this.vcServer, this.label,
            this.vcServer.AuthenticatedUser, this.labelScopeDirectory, this.labelComment);
        List<LabelItemSpec> labelSpecs = new List<LabelItemSpec>();
        // iterate files and versions
        foreach (var fileAndVersion in this.filesAndVersions)
        {
            VersionSpec vs = null;
            Item i = null;
            // i have no idea why the itemspec is needed instead of the item and version...
            ItemSpec iSpec = new ItemSpec("{0}/{1}".FormatString(this.source, fileAndVersion.Key),
                RecursionType.None);
            GetItemAndVersionSpec(fileAndVersion.Key, fileAndVersion.Value, out vs, out i);
            labelSpecs.Add(new LabelItemSpec(iSpec, vs, false));
        }
        this.vcServer.CreateLabel(label, labelSpecs.ToArray(), LabelChildOption.Merge);

    (There are some extension methods in there... this is all largely lifted from this blog post.) The thing that concerns me is statements like this in the MSDN docs: "This enumeration supports the .NET Framework infrastructure and is not intended to be used directly from your code." So MSDN is telling me not to use that enumeration (LabelChildOption), which is, as far as I know, the only way to create a label and add it to a file. Is there a better way? Is this sort of a "grey" area in the TFS API?

    Read the article

  • "jpeglib.h: No such file or directory" ghostscript port in OPENBSD

    - by holms
    Hello, I have a problem compiling ghostscript from ports on OpenBSD 4.7. I have jpeg-7 installed, and I have the latest port tree for OpenBSD 4.7.

        ===> Building for ghostscript-8.63p11
        mkdir -p /usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63/obj
        gmake LDFLAGS='-L/usr/local/lib -shared' GS_XE=./obj/../obj/libgs.so.11.0 STDIO_IMPLEMENTATION=c DISPLAY_DEV=./obj/../obj/display.dev BINDIR=./obj/../obj GLGENDIR=./obj/../obj GLOBJDIR=./obj/../obj PSGENDIR=./obj/../obj PSOBJDIR=./obj/../obj CFLAGS='-O2 -fno-reorder-blocks -fno-reorder-functions -fomit-frame-pointer -march=i386 -fPIC -Wall -Wstrict-prototypes -Wmissing-declarations -Wmissing-prototypes -fno-builtin -fno-common -DGS_DEVS_SHARED -DGS_DEVS_SHARED_DIR=\"/usr/local/lib/ghostscript/8.63\"' prefix=/usr/local ./obj/../obj/gsc
        gmake[1]: Entering directory `/usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63'
        cc -I./obj/../obj -I./src -DHAVE_MKSTEMP -O2 -fno-reorder-blocks -fno-reorder-functions -fomit-frame-pointer -march=i386 -fPIC -Wall -Wstrict-prototypes -Wmissing-declarations -Wmissing-prototypes -fno-builtin -fno-common -DGS_DEVS_SHARED -DGS_DEVS_SHARED_DIR=\"/usr/local/lib/ghostscript/8.63\" -DGX_COLOR_INDEX_TYPE='unsigned long long' -o ./obj/../obj/sdctc.o -c ./src/sdctc.c
        In file included from src/sdctc.c:17:
        obj/jpeglib_.h:1:21: jpeglib.h: No such file or directory
        In file included from src/sdctc.c:19:
        src/sdct.h:58: error: field `err' has incomplete type
        src/sdct.h:70: error: field `err' has incomplete type
        src/sdct.h:72: error: field `cinfo' has incomplete type
        src/sdct.h:73: error: field `destination' has incomplete type
        src/sdct.h:84: error: field `err' has incomplete type
        src/sdct.h:87: error: field `dinfo' has incomplete type
        src/sdct.h:88: error: field `source' has incomplete type
        gmake[1]: *** [obj/../obj/sdctc.o] Error 1
        gmake[1]: Leaving directory `/usr/ports/pobj/ghostscript-8.63p11/ghostscript-8.63'
        gmake: *** [so] Error 2
        *** Error code 2
        Stop in /usr/ports/print/ghostscript/gnu (line 2225 of /usr/ports/infrastructure/mk/bsd.port.mk).

    I tried adding one more parameter to CFLAGS in the Makefile, with the value "-I/usr/local", but no luck =( People on IRC [freenode, #openbsd channel] refuse to give any help for ports at all, and even more so because this is the 4.7 unstable version. I have my reasons to use this version and ports, believe me =) The relevant CFLAGS section:

        CFLAGS+= -DSYS_TYPES_HAS_STDINT_TYPES \
                 -I${LOCALBASE}/include \
                 -I${LOCALBASE}/include/ijs \
                 -I${LOCALBASE}/include/libpng \

    Read the article

  • Best Practices for Setup and Management of an Open Source Project

    - by VirtuosiMedia
    Later this year I want to release a PHP framework that I've been working on as open source. I do use source control (SVN), but on an extremely limited basis. I'm self-taught, I develop by myself, and I don't have the experience of working with large teams. I have some ideas about what can help make a project successful, but I'm fuzzy on some of the details. Since it's not yet released, I want to do everything I can to set up the right infrastructure from the beginning. What do I need to know in order to set up and manage a successful project? Some ideas that I have to make it successful (beyond marketing it):
    - Good documentation and tutorials
    - Automated unit tests and builds to push updates to the website
    - A clear roadmap
    - Bug tracking integrated with the source control
    - A style guide to keep the code consistent, along with clear
    - A forum for the community to get support, share ideas, etc.
    - A good example application built with the framework
    - A blog to keep the community informed
    - Maintaining backwards compatibility wherever possible
    Some of my questions:
    - How do I set up and automate a one-step submit-test-commit-generate API docs-push update to website process?
    - How do I handle (technically) submissions from other users? How can I ensure that those submissions must be approved before being integrated?
    - What are some of the pitfalls that can be avoided in terms of the project community? I'd prefer to have it be as friendly and helpful as possible without a lot of drama.
    I'd love to learn from your experience on any of these points. If you think I'm missing anything big, please share that as well. Any resources (preferably geared toward a beginner) that you could point me towards would also be greatly appreciated.

    Read the article

  • Unusual request URL in ASP.NET health monitoring event

    - by Troy Hunt
    I'm seeing a rather strange occurrence in the request information section of an ASP.NET health monitoring email that I hope someone can shed some light on. This is a publicly facing website which runs on infrastructure at an Indian hosting provider. Health monitoring notifies us of server errors via automated email, but every now and then the requested URL appears as a totally different website. For example:

        Request information:
        Request URL: http://www.baidu.com/Default.aspx
        Request path: /Default.aspx
        User host address: 221.13.128.175
        User:
        Is authenticated: False
        Authentication Type:
        Thread account name: NT AUTHORITY\NETWORK SERVICE

    Obviously the site in question is not Baidu, and obviously this attribute is not the referrer either; the "Request URL" value is the path which has generated the error. The IP address is located in Beijing (coincidental, given the Baidu address?) and in this instance it looks like the SQL Server backend was not accessible (I haven't included the entire error message for security's sake). What would cause the request URL attribute to be arbitrarily changed to that of another site? I've never seen this occur in a health monitoring event before. Thanks!

    Read the article

  • Reduce number of config files to as few as possible

    - by Scott
    For most of my applications I use iBatis.Net for database access/modeling and log4Net for logging. In doing this, I need a number of *.config files for each project. For example, for a simple application I need to have the following *.config files:
    - app.config ([AssemblyName].[Extension].config)
    - [AssemblyName].SqlMap.config
    - [AssemblyName].log4Net.config
    - [AssemblyName].SqlMapProperties.config
    - providers.config
    When these applications go from DEV to TEST to PRODUCTION environments, the settings contained in these files change depending on the environment. When the number of files gets compounded by having 5-10 (or more) supporting executables per project, the workload on the infrastructure team (the ones doing the roll-outs to the different environments) gets rather high. We also have a high risk of one of the config files being missed, or a mistype in a config file. What is the best way to avoid these risks? Should I combine all of the config files into one file? (Is that possible with iBatis?) I know that with Visual Studio 2010 they introduced transforms for these config files, which allow the developer to set up all the settings for the different environments and then, dynamically (depending on the build kicked off), the config files get updated to the correct versions. (VS 2010 - transforms) Thank you for any help that you can provide.
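
    As a sketch of the VS 2010 transform feature mentioned above: a per-environment file such as Web.Production.config declares only the deltas, and the build applies them to the base config (the connection string and its name below are illustrative assumptions):

        <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
          <connectionStrings>
            <!-- replace the DEV connection string with the PRODUCTION one,
                 matching the entry by its name attribute -->
            <add name="Main"
                 connectionString="Data Source=prod-sql;Initial Catalog=AppDb;Integrated Security=True"
                 xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
          </connectionStrings>
        </configuration>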

    Read the article

  • Pros & Cons of Google App Engine

    - by Rishi
    Pros & Cons of Google App Engine [An Updated List 21st Aug 09]. Help me compile a list of all the advantages & disadvantages of building an application on the Google App Engine.
    Pros:
    1) No need to buy servers or server space (no maintenance).
    2) Makes solving the problem of scaling much easier.
    Cons:
    1) Locked into Google App Engine??
    2) Developers have read-only access to the filesystem on App Engine.
    3) App Engine can only execute code called from an HTTP request (except for scheduled background tasks).
    4) Users may upload arbitrary Python modules, but only if they are pure-Python; C and Pyrex modules are not supported.
    5) App Engine limits the maximum rows returned from an entity get to 1000 rows per Datastore call.
    6) Java applications may only use a subset (The JRE Class White List) of the classes from the JRE standard edition.
    7) Java applications cannot create new threads.
    Known issues: http://code.google.com/p/googleappengine/issues/list
    Hard limits:
    - Apps per developer - 10
    - Time per request - 30 sec
    - Files per app - 3,000
    - HTTP response size - 10 MB
    - Datastore item size - 1 MB
    - Application code size - 150 MB
    Pro or con? App Engine's infrastructure removes many of the system administration and development challenges of building applications to scale to millions of hits. Google handles deploying code to a cluster, monitoring, failover, and launching application instances as necessary. While other services let users install and configure nearly any *NIX compatible software, App Engine requires developers to use Python or Java as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing Web applications can't run on App Engine without modification, because they require a relational database.

    Read the article

  • Configure IIS7 to serve static content through the ASP.NET runtime

    - by Anton Gogolev
    I've searched high and low and still cannot find a definite answer. How do I configure IIS 7.0, or a web application in IIS, so that the ASP.NET runtime will handle all requests -- including ones to static files like *.js, *.gif, etc.? What I'm trying to do is as follows. We have a kind of SaaSy site, which we can "skin" for every customer. "Skinning" means developing a custom master page and using a bunch of *.css and other images. Quite naturally, I'm using a VirtualPathProvider, which operates like this:

        public override System.Web.Hosting.VirtualFile GetFile(string virtualPath)
        {
            if (PhysicalFileExists(virtualPath))
            {
                var virtualFile = base.GetFile(virtualPath);
                return virtualFile;
            }
            if (VirtualFileExists(virtualPath))
            {
                var brandedVirtualPath = GetBrandedVirtualPath(virtualPath);
                var absolutePath = HttpContext.Current.Server.MapPath(brandedVirtualPath);
                Trace.WriteLine(string.Format("Serving '{0}' from '{1}'", brandedVirtualPath, absolutePath),
                    "BrandingAwareVirtualPathProvider");
                var virtualFile = new VirtualFile(brandedVirtualPath, absolutePath);
                return virtualFile;
            }
            return null;
        }

    The basic idea is as follows: we have a branding folder inside our webapp, which in turn contains folders for each "brand", with "brand" being equal to host name. That is, requests to http://foo.example.com/ should use static files from branding/foo_example_com, whereas http://bar.example.com/ should use content from branding/bar_example_com. Now what I want IIS to do is to forward all requests for static files to the StaticFileHandler, which would then use this whole "infrastructure" and serve the correct files. However, try as I might, I cannot configure IIS to do this.
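
    For reference, a minimal sketch of the usual IIS7 lever for this, assuming the application pool runs in integrated pipeline mode: routing every request, static files included, through the managed pipeline, which is what lets a VirtualPathProvider see them.

        <!-- web.config: send all requests through managed modules/handlers -->
        <system.webServer>
          <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    The blunt-instrument nature of this switch is the usual design trade-off: every image and script request now pays the managed-pipeline cost.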

    Read the article

  • Maintain order of messages via proxies to app servers

    - by David Turner
    Hi, I am receiving messages from a 3rd party via an HTTP POST, and it is important that the order in which the messages hit our infrastructure is maintained through the load balancers and proxies until they hit our application server. A quick diagram (proxies are in place due to security requirements):

        [ACE load balancer] - [2 proxies] - [Application Servers]

    or maybe:

        [ACE load balancer] - [2 proxies] - [ACE load balancer] - [Application Servers]

    My idea was to set up the load balancers in active-passive mode, to force all messages to use one proxy, and then have both proxies hit a second load balancer configured in active-passive to hit one application server. Whilst the above is not ideal, it does give me resilience, and once the message is in my app servers, I enter a stateless world and load balance across both nodes of my cluster. However, I am concerned that even a single proxy could send messages out of order; perhaps if 2 messages are received very close together, message 2 might get processed faster than message 1. Is this possible? Likely? Is there a simple open source proxy (mod_proxy?) that can be easily configured to just pass messages through it, and that guarantees to send the messages through in the order they are received? If so, which one? Finally, links to how I should configure it would be great. In fact, any links to articles around avoiding "out of order" messages using hardware would be gratefully received. Thanks. PS: for those that are interested, the app is a Java Spring Integration application, currently on an application server.

    Read the article

  • java.net.SocketException: Software caused connection abort: recv failed; Causes and cures?

    - by IVR Avenger
    Hi, all. I've got an application running on Apache Tomcat 5.5 on a Win2k3 VM. The application serves up XML to be consumed by some telephony appliances as part of our IVR infrastructure. The application, in turn, receives its information from a handful of SOAP services. This morning, the SOAP services were timing out intermittently, causing all sorts of exceptions. Once these stopped, I noticed that our application was still performing very slowly, in that it took a long time to render and deliver pages. This sluggishness was noticed both on the appliances that consume the Tomcat output and in a simple test of requesting some static documents from my web browser. Restarting Tomcat immediately resolved the issue. Cracking open the localhost log, I see a ton of these errors, right up until I restarted Tomcat:

        WARNING: Exception thrown whilst processing POSTed parameters
        java.net.SocketException: Software caused connection abort: recv failed

    After a bit of Googling, my working theory is that the SOAP issue caused my users to get errors, which caused them to make more requests, which put an increased load on the application. This caused it to run out of available sockets to handle incoming requests. So, here's my quandary:
    1. Is this a valid hypothesis, or am I just in over my head with HTTP and Tomcat?
    2. If this is a valid hypothesis, is there a way to increase the size of the "socket queue", so that this doesn't happen in the future?
    Thanks! IVR Avenger
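
    On the "socket queue" idea: a sketch of the knobs Tomcat 5.5 exposes on its HTTP connector in server.xml; acceptCount is the backlog for connections that arrive while all maxThreads workers are busy (the values below are illustrative, not recommendations):

        <!-- server.xml: Coyote HTTP/1.1 connector -->
        <Connector port="8080"
                   maxThreads="250"
                   acceptCount="200"
                   connectionTimeout="20000" />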

    Read the article

  • How can I set up Hudson to use the same repository for different projects and maintain separate changelogs?

    - by Allen
    I typically set up SVN to host one big project per repository, but a lot of our infrastructure has changed and we now have one main SVN server with a hierarchy like so:

        Branches
        Tags
        Trunk
            Project1 files & folders
            Project2 files & folders
            Project3 files & folders

    Projects 1, 2, and 3 do not share anything amongst themselves; they are independent projects, each with its own solution file to be built. I can set up projects in Hudson like so:

        Repository Url: http://server/svn/MainRepository
        Local module directory (optional): /Trunk/Project1

    That will maintain a separate workspace for each project, but every time you commit to Project 2 or Project 3, a build gets kicked off in Hudson for every project based in that repository. Also, any commit made anywhere in the repository is pulled down and inserted into the Hudson changelog for all of them. I know the easiest solution would be to simply separate every project into its own repository. However, if I can't do that for various reasons, is there a feasible way to achieve the functionality that having separate repositories gets me? I want commits to the subfolder of Project 1 to only affect Project 1. No other project's commits should cause Project 1 to build, and Project 1's changelog in Hudson should only have commit notes from Project 1.

    Read the article

  • ASP.NET MVC and NHibernate coupling

    - by Ben
    I have just started learning NHibernate. Over the past few months I have been using IoC/DI (StructureMap) and the repository pattern, and it has made my applications much more loosely coupled and easier to test. When switching my persistence layer to NHibernate I decided to stick with my repositories. Currently I am creating a new session on each method call, but of course this means that I cannot benefit from lazy loading. Therefore I wish to implement session-per-request, but in doing so this will make my web project dependent on NHibernate (perhaps this is not such a bad thing?). I was planning to inject ISession into my repositories and create and dispose sessions in BeginRequest/EndRequest events (see http://ayende.com/Blog/archive/2009/08/05/do-you-need-a-framework.aspx). Is this a good approach? Presumably I cannot use session-per-request without having a reference to NHibernate in my web project? Having the web project dependent on NHibernate prompts my next (few) questions -- why even bother with the repository? Since my web app is calling services that talk to the repositories, why not ditch the repositories and just add my NHibernate persistence code inside the services? And finally, is there really any need to split out into so many projects? Is a web project and an infrastructure project sufficient? I realise that I have veered off a bit from my original question, but it seems that everyone has their own opinion on these topics. Some people use the repository pattern with NHibernate, some don't. Some people stick their mapping files with the related classes, others have a separate project for this. Many thanks, Ben
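
    A minimal sketch of the session-per-request idea as an IHttpModule, assuming a single ISessionFactory built at application startup (the SessionFactoryHolder class and the item key are illustrative; real code would flush/commit a transaction before disposing):

        using System.Web;
        using NHibernate;

        public class NHibernateSessionModule : IHttpModule
        {
            private const string SessionKey = "nh.session"; // illustrative key

            public void Init(HttpApplication app)
            {
                // open a session when the request starts...
                app.BeginRequest += (sender, e) =>
                    HttpContext.Current.Items[SessionKey] =
                        SessionFactoryHolder.Factory.OpenSession();

                // ...and dispose it when the request ends, so lazy loading
                // works for the whole page lifetime
                app.EndRequest += (sender, e) =>
                {
                    var session = HttpContext.Current.Items[SessionKey] as ISession;
                    if (session != null)
                        session.Dispose();
                };
            }

            public void Dispose() { }
        }

    Repositories then resolve the current ISession from HttpContext.Current.Items (e.g. via a StructureMap registration) instead of opening their own, which keeps the NHibernate reference out of everything except this module and the infrastructure project.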

    Read the article

  • How do I implement repository pattern and unit of work when dealing with multiple data stores?

    - by Jason
    I have a unique situation where I am building a DDD-based system that needs to use both Active Directory and a SQL database for persistence. Initially this wasn't a problem, because our design was set up with a unit of work that looked like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Commit();
        }

    and our repositories looked like this:

        public interface IRepository<T>
        {
            T GetByID();
            void Save(T entity);
            void Delete(T entity);
        }

    In this setup, our load and save would handle the mapping between both data stores, because we wrote it ourselves. The unit of work would handle transactions and would contain the LINQ to SQL data context that the repositories would use for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations. Now we are trying to adapt it to Entity Framework and take advantage of POCO. Ideally we would not need the Save() method, because the domain objects are being tracked by the object context; we would just need to add a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Save();
            void Commit();
        }

        public interface IRepository<T>
        {
            T GetByID();
            void Add(T entity);
            void Delete(T entity);
        }

    This solves the data access problem with Entity Framework, but does not solve the problem of our Active Directory integration. Before, it was in the Save() method on the repository, but now it has no home. The unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I argue this design only works if you have only one data store, using Entity Framework. Any ideas how to best approach this issue? Where should I put this logic?
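
    One way to give the Active Directory work a home, sketched under the question's own interfaces (EnqueueDirectoryAction and the delegate queue are invented names for illustration): let repositories register AD side effects with the unit of work, which replays them after the Entity Framework context saves.

        using System;
        using System.Collections.Generic;
        using System.Data.Objects;

        public class UnitOfWork
        {
            private readonly ObjectContext _context;
            private readonly List<Action> _directoryActions = new List<Action>();

            public UnitOfWork(ObjectContext context)
            {
                _context = context;
            }

            // repositories call this instead of touching AD inline
            public void EnqueueDirectoryAction(Action action)
            {
                _directoryActions.Add(action);
            }

            public void Save()
            {
                _context.SaveChanges();      // flush the EF-tracked SQL changes first
                foreach (var action in _directoryActions)
                    action();                // then apply the queued AD changes
                _directoryActions.Clear();
            }
        }

    The obvious caveat, and why this is only a sketch: AD is not transactional, so a failure partway through the queue leaves the two stores out of sync and needs a compensation strategy.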

    Read the article

  • Transitive SQL query on same table

    - by MiKu
    Hey, consider the following table and data...

        in_timestamp | out_timestamp | name  | in_id | out_id | in_server      | out_server     | status
        timestamp1   | timestamp2    | data1 | id1   | id2    | others-server1 | my-server1     | success
        timestamp2   | timestamp3    | data1 | id2   | id3    | my-server1     | my-server2     | success
        timestamp3   | timestamp4    | data1 | id3   | id4    | my-server2     | my-server3     | success
        timestamp4   | timestamp5    | data1 | id4   | id5    | my-server3     | others-server2 | success

    The above data represents the log of an execution flow of some data across servers. E.g. some data has flowed from 'others-server1' to a bunch of 'my-servers' and finally to the destined 'others-server2'. Questions:
    1) I need to present this log to the client in a readable form, where he doesn't need to know anything about the bunch of 'my-servers'. All I am supposed to give is the timestamp at which the data entered my infrastructure and when it left, drilling down to the following info:

        in_timestamp (of 'others-server1' to 'my-server1')
        out_timestamp (of 'my-server3' to 'others-server2')
        name
        status

    I want to write SQL for the same. Can someone help?
    NOTE: there might not be 3 'my-servers' all the time. It differs from situation to situation, e.g. there might be 4 'my-servers' involved for, say, data2!
    2) Are there any alternatives to plain SQL? I mean stored procs, etc.?
    3) Optimizations? (The records are huge in number! As of now, around 5 million a day, and we are supposed to show records that are up to a week old.)
    In advance, THANKS FOR THE HELP! :)

    Read the article

  • SEO: A whois server that works for .SE domains?

    - by Niels Bosma
    I'm developing a small domain checker and I can't get .SE to work: public string Lookup(string domain, RecordType recordType, SeoToolsSettings.Tld tld) { TcpClient tcp = new TcpClient(); tcp.Connect(tld.WhoIsServer, 43); string strDomain = recordType.ToString() + " " + domain + "\r\n"; byte[] bytDomain = Encoding.ASCII.GetBytes(strDomain.ToCharArray()); Stream s = tcp.GetStream(); s.Write(bytDomain, 0, strDomain.Length); StreamReader sr = new StreamReader(tcp.GetStream(), Encoding.ASCII); string strLine = ""; StringBuilder builder = new StringBuilder(); while (null != (strLine = sr.ReadLine())) { builder.AppendLine(strLine); } tcp.Close(); if (tld.WhoIsDelayMs > 0) System.Threading.Thread.Sleep(tld.WhoIsDelayMs); return builder.ToString(); } I've tried whois servers whois.nic-se.se and whois.iis.se put I keep getting: # Copyright (c) 1997- .SE (The Internet Infrastructure Foundation). # All rights reserved. # The information obtained through searches, or otherwise, is protected # by the Swedish Copyright Act (1960:729) and international conventions. # It is also subject to database protection according to the Swedish # Copyright Act. # Any use of this material to target advertising or # similar activities is forbidden and will be prosecuted. # If any of the information below is transferred to a third # party, it must be done in its entirety. This server must # not be used as a backend for a search engine. # Result of search for registered domain names under # the .SE top level domain. # The data is in the UTF-8 character set and the result is # printed with eight bits. "domain google.se" not found. Edit: I've tried changing to UTF8 with no other result. When I try using whois from sysinternals I get the correct result, but not with my code, not even using SE.whois-servers.net. /Niels

    Read the article

  • Is there a way to avoid spaghetti code over the years?

    - by Yoni Roit
    I've had several programming jobs, each one with 20-50 developers and projects going on for 3-5 years. Every time it's the same. Some programmers are bright, some are average. Everyone has their CS degree, everyone has read design patterns. Intentions are good, people are trying hard to write good code, but still, after a couple of years the code turns into spaghetti. Changes in module A suddenly break module B. There are always these parts of the code that no one can understand except for the person who wrote them. Changing the infrastructure is impossible, and backwards compatibility issues prevent good features from getting in. Half of the time you just want to rewrite everything from scratch. And people more experienced than me treat this as normal. Is it? Does it have to be? What can I do to avoid this, or should I accept it as a fact of life? Edit: Guys, I am impressed with the amount and quality of responses here. This site and its community rock!

    Read the article

  • Razor websites not working and all DLLs are present

    - by Michael Tot Korsgaard
    I've uploaded a .cshtml website to a Surftown server and I'm having problems running the Razor code (the page renders incorrectly; screenshot of Default.cshtml not included here). I've already checked for internal communication problems, and that check came back fine (result screenshot also not included). So why isn't it working, and how can I fix it? I've heard that it can be a problem with views, but how would I fix this if that's the case? My website's folder tree (and some files too):

        App_Code
        App_Data
            packages
                Microsoft.AspNet.Razor.2.0.20710.0
                Microsoft.Asp.Net.WebPages.2.0.20710.0
                Microsoft.Asp.Net.WebPages.Administration.2.0.20710.0
                Microsoft.Asp.Net.WebPages.Data.2.0.20710.0
                Microsoft.Asp.Net.WebPages.WebData.2.0.20710.0
                Microsoft.Web.Infrastructure.1.0.0.0
                NuGet.Core.1.6.2
        bin
        packages
            jQuery.2.0.3
                Content
                Scripts
                Tools
            Microsoft.AspNet.Mvc.4.0.30506.0
                lib
                    net40
            Microsoft.AspNet.Razor.2.0.30506.0
                lib
                    net40
            Microsoft.AspNet.WebPages.2.0.30506.0
                lib
                    net40
        Pages
            Chapters
                Read.cshtml
                Edit
                Move
                Chapter.cshtml
                Entry.cshtml
            Entries
                EnterEntry.cshtml
                EnterNote.cshtml
            Login
                Login.cshtml
            Search
                Result.cshtml
        Scripts
            Addons
            TinyMCE
        Styles
            CSS
        Views
            _Layout.cshtml
        Default.cshtml

    My web.config file looks like this:

        <?xml version="1.0"?>
        <configuration>
          <system.web>
            <compilation debug="true" targetFramework="4.0">
              <buildProviders>
                <add extension=".cshtml" type="System.Web.WebPages.Razor.RazorBuildProvider, System.Web.WebPages.Razor"/>
              </buildProviders>
              <assemblies>
                <add assembly="System.Web.Mvc, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
              </assemblies>
            </compilation>
          </system.web>
          <connectionStrings>
            <add connectionString="database connection" providerName="System.Data.SqlClient"/>
          </connectionStrings>
        </configuration>

    EDIT: Is it a problem that all my files are .cshtml?

    Read the article

  • How to configure database connection securely

    - by chiccodoro
    Similar but not the same: "How to securely store database connection details" and "Securely connecting to database within a application". Hi all, I have a C# WinForms application connecting to a database server. The database connection string, including a generic user/pass, is placed in an NHibernate configuration file, which lies in the same directory as the exe file. Now I have this issue: the user that runs the application should not get to know the username/password of the general database user, because I don't want him to rummage around in the database directly. Alternatively, I could hardcode the connection string, which is bad because the administrator must be able to change it if the database is moved or if he wants to switch between dev/test/prod environments. So far I've found three possibilities:
    - The first referenced question was generally answered by making the file readable only by the user that runs the application. But that's not enough in my case (the user running the application is a person; the database user/pass are general and shouldn't even be accessible to the person).
    - The first answer additionally proposed to encrypt the connection data before writing it to the file. With this approach, the administrator is no longer able to configure the connection string, because he cannot encrypt it by hand.
    - The second referenced question provides an approach for this very scenario, but it seems very complicated.
    My questions to you:
    - This is a very general issue, so isn't there any general "how-to-do-it" way, somehow a "design pattern"? Is there some support in .NET's config infrastructure? (See the sketch below.)
    - (optional, maybe out of scope) Can I combine that easily with the NHibernate configuration mechanism?
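
    There is such support: "protected configuration". A minimal sketch, assuming the connection string is moved into the standard connectionStrings section of app.config; the administrator edits the file in clear text, and the application (or a deployment step) encrypts the section in place on first run:

        using System.Configuration;

        static class ConfigProtector
        {
            // Encrypts the connectionStrings section with machine-scoped DPAPI
            // so a human reading the file sees only ciphertext, while this
            // application can still read it transparently via ConfigurationManager.
            public static void ProtectConnectionStrings()
            {
                Configuration config =
                    ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);
                ConfigurationSection section = config.GetSection("connectionStrings");

                if (!section.SectionInformation.IsProtected)
                {
                    section.SectionInformation.ProtectSection(
                        "DataProtectionConfigurationProvider"); // DPAPI provider
                    config.Save(ConfigurationSaveMode.Modified);
                }
            }
        }

    The caveat: machine-scoped DPAPI means any code running on that machine can decrypt the section, so this raises the bar against casual snooping rather than stopping a determined local user. Hooking it into NHibernate would mean reading the decrypted string via ConfigurationManager.ConnectionStrings and handing it to the NHibernate configuration programmatically; that last step is an assumption about your setup, not something the config system does for you.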

    Read the article

  • Which tools to use and how to find file descriptors leaking from Glassfish?

    - by cclark
    We release new code to production every week and Glassfish hasn't had any problems. This weekend we had to move racks at our hosting provider. There were not any code changes (they just powered off, moved, re-racked and powered on), but we're on a new network infrastructure and suddenly we're leaking file descriptors like a sieve. So I'm guessing there is some sort of connection attempting to be made which now fails due to a network change. I'm running Glassfish v2ur2-b04/AS9.1_02 on RHEL4 with an embedded IMQ instance. After the move I started seeing:

        [#|2010-04-25T05:34:02.783+0000|SEVERE|sun-appserver9.1|javax.enterprise.system.container.web|_ThreadID=33;_ThreadName=SelectorThread-?4848;_RequestID=c4de6f6d-c1d6-416d-ac6e-49750b1a36ff;|WEB0756: Caught exception during HTTP processing.
        java.io.IOException: Too many open files
            at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        ...
        [#|2010-04-25T05:34:03.327+0000|WARNING|sun-appserver9.1|javax.enterprise.system.stream.err|_ThreadID=34;_ThreadName=Timer-1;_RequestID=d27e1b94-d359-4d90-a6e3-c7ec49a0f383;|java.lang.NullPointerException
            at com.sun.jbi.management.system.AutoAdminTask.pollAutoDirectory(AutoAdminTask.java:1031)

    Using lsof I check the number of file descriptors, and I see quite a few entries which look like:

        java 18510 root 8556u sock 0,4 1555182 can't identify protocol
        java 18510 root 8557u sock 0,4 1555320 can't identify protocol
        java 18510 root 8558u sock 0,4 1555736 can't identify protocol
        java 18510 root 8559u sock 0,4 1555883 can't identify protocol

    If I do a count of open file descriptors every minute, I see it growing by 12 every minute. I have no idea what these sockets are. I've undeployed my application, so there is only a plain Glassfish instance running, and I still see it leaking 12 file descriptors a minute. So I think this leak is in Glassfish or potentially IMQ. What approach should I take to tracking down these sockets of unknown protocol? What tools can I use (or flags can I pass to lsof) to get more information about where to look? Thanks, chuck

    Read the article

  • PHP cURL error: "Empty reply from server"

    - by ABach
    All, I have a class function to interface with the RESTful API for Last.FM -- its purpose is to grab the most recent tracks for my user. Here it is:

        private static $base_url = 'http://ws.audioscrobbler.com/2.0/';

        public static function getTopTracks($options = array())
        {
            $options = array_merge(array(
                'user' => 'bachya',
                'period' => NULL,
                'api_key' => 'xxxxx...', // obfuscated, obviously
            ), $options);
            $options['method'] = 'user.getTopTracks';

            // Initialize cURL request and set parameters
            $ch = curl_init();
            curl_setopt_array($ch, array(
                CURLOPT_URL => self::$base_url,
                CURLOPT_POST => TRUE,
                CURLOPT_POSTFIELDS => $options,
                CURLOPT_RETURNTRANSFER => TRUE,
                CURLOPT_TIMEOUT => 30,
                CURLOPT_USERAGENT => 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)'
            ));
            $results = curl_exec($ch);
            return $results;
        }

    This returns "Empty reply from server". I know that some have suggested that this error comes from a fault in network infrastructure; I do not believe this to be true in my case. If I run a cURL request through the command line, I get my data; the Last.FM service is up and accessible. Before I go to those folks and see if anything has changed, I wanted to check with you fine folks and see if there's some issue in my code that would be causing this. Thanks!

    Read the article
