Search Results

Search found 16023 results on 641 pages for 'knowledge management'.


  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like these aren't available by default on most phones:

      - full text search: searches all address book contents and messages, anything else being a plus
      - better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something, etc.)
      - bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone)
      - multitasking capabilities worth mentioning
      - regular (e.g. weekly) software updates, making the phone much more usable (even if they had to be done over USB rather than over the network)

    I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year-old phone has a CPU an order of magnitude faster than my first PC and about as much storage space, and it's ridiculous how bad (slow, unwieldy) the software is; and it's not one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled up?

    Read the article

  • Minimum privileges to read SQL Jobs using SQL SMO

    - by Gustavo Cavalcanti
    I wrote an application that uses SQL SMO to find all SQL Servers, databases, jobs and job outcomes. The application is executed through a scheduled task using a local service account. This service account is local to the application server only and is not present on any SQL Server to be inspected. I am having problems getting information on jobs and job outcomes when connecting to the servers using a user with db_datareader rights on system tables. If we make the user a sysadmin on the server, it all works fine. My question to you is: what are the minimum privileges a local SQL Server user needs to have in order to connect to the server and inspect jobs/job outcomes using the SQL SMO API? I connect to each SQL Server by doing the following:

        var conn = new ServerConnection
        {
            LoginSecure = false,
            ApplicationName = "SQL Inspector",
            ServerInstance = serverInstanceName,
            ConnectAsUser = false,
            Login = user,
            Password = password
        };
        var smoServer = new Server(conn);

    I read the jobs by reading smoServer.JobServer.Jobs and read the JobSteps property on each of these jobs. The variable smoServer is of type Microsoft.SqlServer.Management.Smo.Server; user/password belong to the login found on each SQL Server to be inspected. All works OK if that user is a sysadmin on the SQL Server to be inspected, as well as if we set ConnectAsUser to true and execute the scheduled task using my own credentials, which grant me sysadmin privileges on SQL Server per my Active Directory membership. Thanks!
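
    One avenue worth testing, offered as an assumption rather than a confirmed fix: since SQL Server 2005, msdb ships fixed database roles for Agent access (SQLAgentUserRole, SQLAgentReaderRole, SQLAgentOperatorRole), and SQLAgentReaderRole can view all jobs and job history without sysadmin. A minimal sketch of granting it to the inspection login (the server and login names here are placeholders):

        using System;
        using System.Data.SqlClient;

        class GrantAgentReader
        {
            static void Main()
            {
                // TARGETSERVER and sql_inspector are illustrative placeholders.
                const string connStr = "Server=TARGETSERVER;Database=msdb;Integrated Security=true";
                const string sql =
                    "CREATE USER sql_inspector FOR LOGIN sql_inspector; " +
                    "EXEC sp_addrolemember N'SQLAgentReaderRole', N'sql_inspector';";

                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery(); // one-time setup per inspected server
                }
            }
        }

    Whether SQLAgentReaderRole alone satisfies everything SMO touches (job outcomes in particular) would need to be verified against the inspected servers.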

    Read the article

  • Configuring IHS server to direct traffic to the Netty component bound to a port

    - by rbot
    I have a server component (based on JBoss Netty, which can maintain and handle persistent connections) deployed in WAS. When deployed and initiated within the WAS environment, this component binds to a port and listens for incoming HTTP connections. (Why I had to deploy a Netty HTTP server within WAS is another story: a management requirement! Netty is deployed in WAS as a Spring bean which, when initiated, runs on a port on the machine, independent of WAS.) Clients (a mobile app) were able to establish persistent HTTP connections (to the above URL::Port) with this Netty component and send/receive requests. Now I have to replicate this feature in our production environment, where an IHS server (web server) sits before the WAS. What I expected was an IHS URL which could redirect the incoming packets to the specific port on WAS, so that the client apps can establish a similar persistent HTTP connection. Our server admin tried a few combinations and we are not able to identify how to proceed further on this. Your expert ideas would be highly appreciated.

    Read the article

  • Retrieving JSON via HTTP request from a JBoss server

    - by Seth Solomon
    I am running into a java.net.SocketException: Unexpected end of file from server when I am trying to query some JSON from my JBoss server. I am hoping someone can spot where I am going wrong. Or does anyone have any suggestions of a better way to pull this JSON from my JBoss server?

        try {
            URL u = new URL("http://localhost:9990/management/subsystem/datasources/data-source/MySQLDS/statistics?read-resource&include-runtime=true&recursive=true");
            HttpURLConnection c = (HttpURLConnection) u.openConnection();
            String encoded = Base64.encode(("username" + ":" + "password").getBytes());
            c.setRequestMethod("POST");
            c.setRequestProperty("Authorization", "Basic " + encoded);
            c.setRequestProperty("Content-Type", "application/json");
            c.setUseCaches(false);
            c.setAllowUserInteraction(false);
            c.setConnectTimeout(5000);
            c.setReadTimeout(5000);
            c.connect();
            int status = c.getResponseCode(); // throws the exception here
            switch (status) {
                case 200:
                case 201:
                    BufferedReader br = new BufferedReader(new InputStreamReader(c.getInputStream()));
                    StringBuilder sb = new StringBuilder();
                    String line;
                    while ((line = br.readLine()) != null) {
                        sb.append(line + "\n");
                    }
                    br.close();
                    System.out.println(sb.toString());
                    break;
                default:
                    System.out.println(status);
                    break;
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure! I was wondering if there is a definitive list somewhere of what is and is not supported by SQL Azure in regards to SQL Server 2008. I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry: http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html

      - Common Language Runtime (CLR)
      - Database file placement
      - Database mirroring
      - Distributed queries
      - Distributed transactions
      - Filegroup management
      - Global temporary tables
      - Spatial data and indexes
      - SQL Server configuration options
      - SQL Server Service Broker
      - System tables
      - Trace flags

    which is a repeat of the MSDN page: http://msdn.microsoft.com/en-us/library/ff394115.aspx

    I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to Azure:

      - XML types (the MSDN page does mention large custom types; I guess that may include this, even if the data schema is really small?)
      - Multi-part views

    I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list, or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • CouchDB, HDFS or HBase: which is right for my situation?

    - by Lucas
    Hello all. This question is regarding data storage systems such as CouchDB, HDFS and HBase, and specifically which is right for my situation. I am looking at making a simple and customized document management system for my organization. Basically, we need the ability to store some Word documents, PDFs and other similar files. I also want to store metadata about these files (e.g., author, dates, etc.). Usage permissions would also be handy, but that can probably be built using metadata. I would also need the ability to full-text index. The ability to version, while not required, would be extremely useful. I would like the ability to simply add hardware to expand the resources of the system, and the system must support Network Attached Storage over the CIFS or NFS protocol(s). I have read about CouchDB, HDFS and HBase. My preferred programming language is C#, as all of my end users will be running Windows machines and I will want to make both web and WinForms client implementations. My question is: which solution best fits my needs? Based on my research it appears that CouchDB (utilizing CouchDB-Lounge and CouchDB-Lucene) perfectly fits my needs. However, since I have worked with CouchDB, I am worried that a bias might be making me overlook something useful for my needs in HDFS or HBase or something similar. Any and all opinions are welcome, as I am looking for community input and really do not want to make the wrong choice at the start of my project. Please ask if you need more information. I thank you all for your time, input and assistance.
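
    For what it's worth, part of why CouchDB keeps coming up for this kind of requirement is that a file plus its metadata maps directly onto CouchDB's document-plus-attachment model, over plain HTTP that C# can speak without a dedicated driver. A rough sketch (hypothetical host, database and document names; assumes the database already exists; error handling omitted):

        using System;
        using System.IO;
        using System.Net;

        class CouchDbSketch
        {
            static void Main()
            {
                var client = new WebClient();
                client.Headers[HttpRequestHeader.ContentType] = "application/json";

                // 1. Create a document holding the file's metadata.
                string docUrl = "http://localhost:5984/documents/quarterly-report";
                string metadata = "{\"author\":\"jsmith\",\"created\":\"2010-06-01\",\"type\":\"report\"}";
                string response = client.UploadString(docUrl, "PUT", metadata);
                // Response looks like {"ok":true,"id":"...","rev":"1-..."}; pull out the revision.
                string rev = ExtractRev(response);

                // 2. Attach the actual file to the document (requires the current revision).
                client.Headers[HttpRequestHeader.ContentType] = "application/pdf";
                byte[] file = File.ReadAllBytes(@"C:\docs\quarterly-report.pdf");
                client.UploadData(docUrl + "/quarterly-report.pdf?rev=" + rev, "PUT", file);
            }

            // Crude extraction to keep the sketch dependency-free; use a JSON library in real code.
            static string ExtractRev(string json)
            {
                int i = json.IndexOf("\"rev\":\"") + 7;
                return json.Substring(i, json.IndexOf('"', i) - i);
            }
        }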

    Read the article

  • Images in database vs file system

    - by Jesse
    We have a project coming up where we will be building a whole backend CMS system that will power our entire extranet and intranet with one package. The question I have been trying to find an answer to is which is better: storing images in the database (SQL Server 2005), so we get integrity, a single replication plan, etc., OR storing them on the file system? One issue we have is that we have multiple load-balanced servers that are required to have the same data at all times. As of now we have SQL replication taking care of that, but file replication seems to be a little tougher. Another concern is that we would like to have multiple resolutions of the same image; we are not sure whether creating and storing each version on the file system would be best, or maybe dynamically creating the requested resolution upon request. Our concerns are with the following:

      - Data integrity
      - Data replication
      - Multiple resolutions
      - Speed of database vs file system
      - Overhead load of database vs file system
      - Data management and backup

    Does anyone have a similar situation or have any input on what would be recommended? Thanks in advance for the help!
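
    A middle ground we've seen argued for, sketched below purely as an illustration (the table, share and connection names are made up): keep the binaries on the file system, but let the database hold the canonical metadata and a content hash, so SQL replication still defines what every load-balanced node should be serving:

        using System;
        using System.Data.SqlClient;
        using System.IO;
        using System.Security.Cryptography;

        class ImageStore
        {
            // Hypothetical share visible to every load-balanced node.
            const string Root = @"\\fileserver\images";
            const string ConnStr = "Server=.;Database=Cms;Integrated Security=true";

            public static void Save(string sourcePath)
            {
                byte[] bytes = File.ReadAllBytes(sourcePath);
                string hash = BitConverter.ToString(SHA1.Create().ComputeHash(bytes)).Replace("-", "");

                // Content-addressed file name: identical images dedupe themselves, and
                // each node can verify its copy against the hash stored in the database.
                string target = Path.Combine(Root, hash + Path.GetExtension(sourcePath));
                File.WriteAllBytes(target, bytes);

                using (var conn = new SqlConnection(ConnStr))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Images (FileName, Sha1, Path) VALUES (@name, @sha1, @path)", conn))
                {
                    cmd.Parameters.AddWithValue("@name", Path.GetFileName(sourcePath));
                    cmd.Parameters.AddWithValue("@sha1", hash);
                    cmd.Parameters.AddWithValue("@path", target);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }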

    Read the article

  • Objective-C autorelease pool not releasing object

    - by Chris
    Hi, I am very new to Objective-C and was reading through memory management. I was trying to play around a bit with NSAutoreleasePool, but somehow it won't release my object. I have a class with a setter and a getter which basically sets an NSString *name. After releasing the pool I tried to NSLog the object and it still works, but I guess it should not?

        @interface TestClass : NSObject {
            NSString *name;
        }
        - (void) setName: (NSString *) string;
        - (NSString *) name;
        @end

        @implementation TestClass
        - (void) setName: (NSString *) string {
            name = string;
        }
        - (NSString *) name {
            return name;
        }
        @end

        int main (int argc, const char * argv[]) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            TestClass *var = [[TestClass alloc] init];
            [var setName:@"Chris"];
            [var autorelease];
            [pool release];
            // This should not be possible?
            NSLog(@"%@", [var name]);
            return 0;
        }

    Read the article

  • Combining properties made available via the ASP.NET web services profile service

    - by Adam
    I really wasn't sure what the title for my question could be, so sorry if it's a bit vague. I'm working on an application that uses client application services for authentication/profile management etc. In web.config for my website, I have profile properties like this:

        <properties>
          <add name="FirstName" type="string" defaultValue=""
               customProviderData="FirstName;nvarchar"/>
          ...

    Basic things like first name, last name, etc. I'm exposing properties for my client app like this:

        <system.web.extensions>
          <scripting>
            <webServices>
              <authenticationService enabled="true" requireSSL="false"/>
              <profileService enabled="true"
                              readAccessProperties="UserProfile"
                              writeAccessProperties="UserProfile"/>
              <roleService enabled="true"/>
            </webServices>
          </scripting>
        </system.web.extensions>

    What I'm wondering is whether it's possible to bundle all the individual profile properties into a single object for client apps to utilize. I originally had all my profile data stored as members of a single class (UserProfile), but I broke it all out so that I could use the SqlTableProfileProvider to store each field as an individual column in the relevant tables. I know I can create a class with members for each type; I'm just not sure if there's an easy way to create an object with all my property values (other than assigning values to this object whenever I assign to the standalone properties). I don't think I'm explaining this very well, so I'll try an example. Say in my website profile I have FirstName and LastName as properties. For my client application profile service I want to have one read-access property, FullName. Is there some way to automatically create FullName from the existing FirstName and LastName properties, without having to also have a separate FullName property (and manually assign data to it whenever I assign data to FirstName and LastName)?
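
    If no server-side mechanism turns up, one low-tech fallback is to leave FirstName and LastName as the transported properties and derive FullName wherever it is consumed. A sketch with a hypothetical client-side wrapper (plain composition, not a profile-service feature):

        // FirstName/LastName travel through the profile service as-is;
        // FullName is derived locally and never stored or transported.
        public class UserProfileView
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }

            public string FullName
            {
                get { return (FirstName + " " + LastName).Trim(); }
            }
        }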

    Read the article

  • Spring - Transaction Readonly

    - by AAK
    Hello gurus! I just wanted your expert opinions on declarative transaction management for Spring. Here is my setup:

      A. The DAO layer is plain old JDBC using JdbcTemplate (no Hibernate etc.).
      B. The service layer is POJOs with declarative transactions as follows: save*, readonly=false, rollback for Throwable.

    Things work fine with the above setup. However, when I say get*, readonly=true, I see errors in my log file saying "Database connection cannot be marked as readonly". This happens for all get* methods in the service layer. Now my questions:

      A. Do I have to mark get* as readonly? All my get* methods are pure DB read operations. I do not wish to run them in any transaction context. How serious is the above error?
      B. When I remove the get* configuration, I do not see the errors; moreover, all my simple get* operations are performed without transactions. Is this the way to go?
      C. Why would anyone want to have transactional methods where readonly = true? Is there any practical significance to this configuration?

    Thank you! As always, your responses are much appreciated!

    Read the article

  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump Excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping and re-creating) is an easy SQL script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data. I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle), where you can right-click on a grid and click "Save As > Insert Statements", and it will just dump a big SQL script wherever you want. The only way I can think of doing it is to save the table to an Excel sheet and then write an Excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.
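
    In case no tooling fits, rolling a quick generator over ADO.NET is also viable. This sketch (hypothetical table name and connection string; naive literal formatting that ignores binary and bit columns and identity inserts) writes one INSERT per row to the console:

        using System;
        using System.Data.SqlClient;
        using System.Text;

        class InsertScriptGenerator
        {
            static void Main()
            {
                const string table = "Customers"; // hypothetical table name
                var sb = new StringBuilder();

                using (var conn = new SqlConnection("Server=.;Database=Dev;Integrated Security=true"))
                using (var cmd = new SqlCommand("SELECT * FROM " + table, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            var values = new string[reader.FieldCount];
                            for (int i = 0; i < reader.FieldCount; i++)
                                values[i] = Literal(reader.GetValue(i));
                            sb.AppendLine("INSERT INTO " + table + " VALUES (" +
                                          string.Join(", ", values) + ");");
                        }
                    }
                }
                Console.WriteLine(sb.ToString());
            }

            // Naive SQL literal formatting: good enough for dummy data, not production-grade.
            static string Literal(object value)
            {
                if (value == DBNull.Value) return "NULL";
                if (value is string) return "'" + ((string)value).Replace("'", "''") + "'";
                if (value is DateTime) return "'" + ((DateTime)value).ToString("yyyy-MM-dd HH:mm:ss") + "'";
                return Convert.ToString(value, System.Globalization.CultureInfo.InvariantCulture);
            }
        }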

    Read the article

  • Hibernate and parent/child relations

    - by Marco
    Hi to all, I'm using Hibernate in a Java application, and I feel that the management of parent/child relationships could be done better. I have a complex set of entities that have various kinds of relationships between them (one-to-many, many-to-many, one-to-one, both unidirectional and bidirectional). Every time an entity with a parent is saved, the parent has to add the child to its collection to establish the relationship (considering a one-to-many relationship). For example:

        Parent p = (Parent) session.load(Parent.class, pid);
        Child c = new Child();
        c.setParent(p);
        p.getChildren().add(c);
        session.save(c);
        session.flush();

    In the same way, if I remove a child, then I have to explicitly remove it from the parent collection too:

        Child c = (Child) session.load(Child.class, cid);
        session.delete(c);
        Parent p = (Parent) session.load(Parent.class, pid);
        p.getChildren().remove(c);
        session.flush();

    I was wondering if there are some best practices out there to do this job in a different way: when I save a child entity, automatically add it to the parent collection; if I remove a child, automatically update the parent collection by removing the child, etc. For example:

        Child c = new Child();
        c.setParent(p);
        session.save(c); // Automatically update the parent collection
        session.flush();

    or

        Child c = (Child) session.load(Child.class, cid);
        session.delete(c); // Automatically updates its parents (could be more than one)
        session.flush();

    Anyway, it would not be difficult to implement this behaviour, but I was wondering whether some standard tools or well-known libraries deal with this issue. And if not, what are the reasons? Thanks

    Read the article

  • How to run White + SL4 UATs through TeamCity?

    - by Duncan Bayne
    After experiencing a series of unpleasant issues with TFS, including source code corruption and project management inflexibility, we (meaning the project team of which I'm a part) have decided to move from TFS 2010 to TeamCity + SVN + V1. I've managed to get our MSTest component and unit tests running as part of every build. However, our UATs are failing, and I was hoping for some advice from the TeamCity community as to best practices w.r.t. running web servers and interacting with the desktop. Each of our UAT fixtures starts a web server to host the site, like this:

        public static void StartWebServer()
        {
            var pathToSite = @"C:\projects\myproject\FrontEnd\MyProject.FrontEnd.Web";
            var webServer = new Process
            {
                StartInfo = new ProcessStartInfo
                {
                    Arguments = string.Format("/port:9150 /path:\"{0}\"", pathToSite),
                    FileName = @"C:\Program Files (x86)\Common Files\microsoft shared\DevServer\10.0\WebDev.WebServer40.EXE"
                }
            };
            webServer.Start();
        }

    Needless to say, this doesn't work when running through TeamCity, as the pathToSite value is different each time. I'm hoping there is a way of determining the path into which the code is checked out prior to building? That would allow me to point the web server at the right place. The other issue is that our UATs use White to drive the Silverlight UI through an instance of Internet Explorer:

        _browserWindow = InternetExplorer.Launch("http://localhost:9150/index.html#/Home", "Home - Windows Internet Explorer");
        _document = _browserWindow.SilverlightDocument;

    I've ensured that the TeamCity service is granted the ability to interact with the desktop, and I've set the build agent machine up to log in automatically (an open session is a prerequisite for White to work properly). Is that all I need to do, or are there additional steps required?
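
    On the checkout-path question, one approach that sidesteps TeamCity-specific variables entirely (a sketch, assuming the usual layout where the test binaries run from somewhere beneath the checkout root) is to walk up from the test assembly's directory until the site folder appears:

        using System;
        using System.IO;

        static class SitePathLocator
        {
            // Walks upward from the test assembly's directory looking for the site folder,
            // so the fixture works both locally and on any build agent checkout directory.
            // Example: Find(@"FrontEnd\MyProject.FrontEnd.Web")
            public static string Find(string siteRelativePath)
            {
                var dir = new DirectoryInfo(AppDomain.CurrentDomain.BaseDirectory);
                while (dir != null)
                {
                    string candidate = Path.Combine(dir.FullName, siteRelativePath);
                    if (Directory.Exists(candidate))
                        return candidate;
                    dir = dir.Parent;
                }
                throw new DirectoryNotFoundException(siteRelativePath + " not found above " +
                                                     AppDomain.CurrentDomain.BaseDirectory);
            }
        }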

    Read the article

  • PHP user authentication libraries / frameworks ... what are the options?

    - by es11
    I am using PHP and the CodeIgniter framework for a project I am working on, and require a user login/authentication system. For now I'd rather not use SSL (it might be overkill, and the fact that I am using shared hosting discourages it). I have considered using OpenID but decided that, since my target audience is generally not technical, it might scare users away (not to mention that it requires mirroring of login information, etc.). I know that I could write hash-based authentication (such as SHA-1) since there is no sensitive data being passed (I'd compare the level of sensitivity to that of Stack Overflow). That being said, before making a custom solution, it would be nice to know if there are any good libraries or packages out there that you have used to provide semi-secure authentication. I am new to CodeIgniter, but something that integrates well with it would be preferable. Any ideas? (I'm open to criticism on my approach and open to suggestions as to why I might be crazy not to just use SSL.) Thanks in advance.

    Update: I've looked into some of the suggestions. I am curious to try out Zend_Auth, since it seems well supported and well built. Does anyone have experience with using Zend_Auth in CodeIgniter (is it too bulky?), and do you have a good reference on integrating it with CI? I do not need any complex authentication schemes, just a simple login/logout/password-management authorization system. Also, DX Auth seems interesting, however I am worried that it is too buggy. Has anybody else had success with it? I realized that I would also like to manage guest users (i.e. users that do not login/register) in a similar way to Stack Overflow, so any suggestions that have this functionality would be great.

    Read the article

  • Action Filter Dependency Injection in ASP.NET MVC 3 RC2 with StructureMap

    - by Ben
    Hi, I've been playing with the DI support in ASP.NET MVC 3 RC2. I have implemented session-per-request for NHibernate and need to inject ISession into my "unit of work" action filter. If I reference the StructureMap container directly (ObjectFactory.GetInstance) or use DependencyResolver to get my session instance, everything works fine:

        ISession Session
        {
            get { return DependencyResolver.Current.GetService<ISession>(); }
        }

    However, if I attempt to use my StructureMap filter provider (which inherits FilterAttributeFilterProvider), I have problems with committing the NHibernate transaction at the end of the request. It is as if ISession objects are being shared between requests. I am seeing this frequently, as all my images are loaded via an MVC controller, so I get 20 or so NHibernate sessions created on a normal page load. I added the following to my action filter:

        ISession Session
        {
            get { return DependencyResolver.Current.GetService<ISession>(); }
        }

        public ISession SessionTest { get; set; }

        public override void OnResultExecuted(System.Web.Mvc.ResultExecutedContext filterContext)
        {
            bool sessionsMatch = (this.Session == this.SessionTest);

    SessionTest is injected using the StructureMap filter provider. I found that on a page with 20 images, sessionsMatch was false for 2-3 of the requests. My StructureMap configuration for session management is as follows:

        For<ISessionFactory>().Singleton()
            .Use(new NHibernateSessionFactory().GetSessionFactory());
        For<ISession>().HttpContextScoped()
            .Use(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());

    In global.asax I call the following at the end of each request:

        public Global()
        {
            EndRequest += (sender, e) =>
            {
                ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
            };
        }

    Is this configuration thread safe? Previously I was injecting dependencies into the same filter using a custom IActionInvoker. This worked fine until MVC 3 RC2, when I started experiencing the problem above, which is why I thought I would try using a filter provider instead. Any help would be appreciated. Ben

    P.S. I'm using NHibernate 3 RC and the latest version of StructureMap.
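
    One pattern worth trying, based on an assumption rather than anything confirmed about RC2: if the filter provider is caching filter instances, a property-injected ISession is captured once and then shared across requests, whereas injecting a factory defers resolution to each call. A sketch (a hypothetical attribute, with StructureMap configured to inject () => ObjectFactory.GetInstance<ISession>() into the property):

        using System;
        using System.Web.Mvc;
        using NHibernate;

        // A Func<ISession> is injected instead of the session itself, so the session
        // is resolved when the request actually runs, not when the (possibly cached)
        // filter instance was built.
        public class UnitOfWorkAttribute : ActionFilterAttribute
        {
            public Func<ISession> SessionFactory { get; set; }

            public override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                ISession session = SessionFactory(); // per-request resolution
                if (session.Transaction.IsActive)
                    session.Transaction.Commit();
            }
        }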

    Read the article

  • Which language should I use to program a GUI application?

    - by Roman
    I would like to write a GUI application for management of information (text documents). In more detail, it should be similar to TiddlyWiki. I would like to have some good visual effects (like a nice representation of tree structures, which you can rotate, and some sound). I also would like to include some communication via the Internet (for sharing and collaboration). It should include some features of such applications as a web browser, a word processor and Skype. Which programming language should I use? I like the idea of using JavaScript (like TiddlyWiki). The good thing about that is that users don't have to install anything: they open a file in a browser and it works! The bad thing is that JavaScript cannot communicate via the Internet with other applications. I think the choice of the programming language, in my case, is conditioned by 2 things:

      1. What can be done with this programming language (which restrictions there are).
      2. How easy it is to program. I would like to have "blocks" which can do a lot of things (rather than to program them and, in this way, to "rediscover a bicycle").

    ADDED: I would like to make it platform independent.

    Read the article

  • Fastest SCM tool available for embedded software development

    - by wrapperm
    Hi all, In my company we are presently using Rational ClearCase as the software configuration management tool for our embedded software development. The software is basically for automobiles, to be specific for engines (I don't think this information really matters). But I find ClearCase to be very slow in performing any activities (accessing files, branching and labelling), in addition to which there are various other limitations. We have recently decided to research some free and open source, distributed version control systems which could handle our large projects with speed and efficiency. The tool should be a full-fledged repository with complete history and full revision tracking capabilities, not dependent on network access or a central server. Branching and merging should be fast and easy to do. It should have a multisite development facility. With the above requirements, we have come up with some of the tools that are presently available in the market: Git, Mercurial, Bazaar, Subversion, CVS, Perforce, and Visual SourceSafe. I need everybody's help in finding an appropriate SCM tool which meets the above requirements. Thanking you in advance, Rahamath.

    Read the article

  • What should be the "trunk": development, or release?

    - by Nix
    I have the unfortunate opportunity of source control via Borland's StarTeam. It unfortunately does very few things well, and one supreme weakness is its view management. I love SVN and come from an SVN mindset. Our issue is that post production release, we are spending countless hours merging changes into a "production support" environment. Please do not harass me: this was not my doing; I inherited it and am trying to present a better way of managing the repository. It is not an option to switch to a different SCM tool. The current setup:

      - Product.1.0 (TRUNK, current production code; at this level are pending bug fixes)
      - Product.2.0 (true trunk; anything checked in gets tested and then released next production cycle; a lot of changes occur in this view)

    My proposal is going to be to swap them: have all development be done on the trunk (Production), tag on releases, and as needed create child views to represent production support bug fixes:

      - Production
      - Production.2.0.SP.1

    I cannot find any documentation to support the above proposal, so I am trying to get feedback on whether or not the change is a good idea and if there is anything you would recommend doing differently.

    Read the article

  • Runtime Exception when using Custom Healthmonitoring event in medium trust

    - by Elementenfresser
    Hi, I'm using custom health monitoring events in ASP.NET. We recently moved to a new server with default High trust permissions. The literature says that health monitoring and custom events should work under Medium or higher trust (http://msdn.microsoft.com/en-us/library/bb398933.aspx). The problem is, it doesn't. In anything less than Full trust I get a SecurityException saying "The application attempted to perform an operation not allowed by the security policy". It works in Full trust, or when I remove the inheritance from System.Web.Management.WebErrorEvent. Any suggestions, anyone? Here is the super simple code-behind with a custom event defined:

        public partial class Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                try
                {
                    CallCustomEvent();
                }
                catch (Exception ex)
                {
                    Response.Write(ex.Message);
                    throw ex;
                }
            }

            /// <summary>
            /// this method is never called due to lacking permissions...
            /// </summary>
            private void CallCustomEvent()
            {
                try
                {
                    // do something useful here
                }
                catch (Exception)
                {
                    // code to instantiate the forbidden inheritance...
                    WebBaseEvent.Raise(new CustomEvent());
                }
            }
        }

        /// <summary>
        /// custom event inheriting WebErrorEvent, which is not allowed in High trust? can't believe that...
        /// </summary>
        public class CustomEvent : WebErrorEvent
        {
            public CustomEvent()
                : base("test", HttpContext.Current.Request, 100001, new ApplicationException("dummy"))
            {
            }
        }

    and the web.config excerpt for High trust:

        <system.web>
          <trust level="High" originUrl="" />

    Read the article

  • SSRS2005 timeout error

    - by jaspernygaard
    Hi, I've been running around in circles for the last 2 days, trying to figure out a problem in our customer's live environment. I figured I might as well post it here, since Google gave me very limited information on the error message (5 results, to be exact). The error boils down to a timeout when requesting a certain report in SSRS 2005, when a certain parameter is used. The deployment scenario is:

      - Machine #1: running Reporting Services (SQL 2005, W2K3, IIS6)
      - Machine #2: running the data warehouse database (SQL 2005, W2K3), which is the data source for #1

    Both machines are running on the same VM cluster and LAN. The report requests a fairly simple stored procedure, let's call it sp(param $a, param $b). When requested with param $a filled, it executes correctly. When param $b is used, it times out after the global timeout period has passed. If I run the stored procedure with param $b directly from SQL Management Studio on #2, it returns the results perfectly fine (within 3-4s). I've profiled the data warehouse database on #2, and when param $b is used, the query from the Reporting Service to the database never reaches #2. The error message that I get upon timeout, when using param $b and invoking the report directly from the SSRS web interface, is: "An error has occurred during report processing. Cannot read the next data row for the data set DataSet. A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user." The ExecutionLog for SSRS doesn't give me much information besides the error message rsProcessingAborted. I'm running out of ideas of how to nail this problem, so I would greatly appreciate any comments, suggestions or ideas. Thanks in advance!

    Read the article

  • Exec problem in SQL Server 2005

    - by IordanTanev
    Hi, I have a situation where I have two databases with the same structure. The first has some data in its data tables. I need to create a script that will transfer the data from the first database to the second. I have created this script:

        DECLARE @table_name nvarchar(MAX), @query nvarchar(MAX)
        DECLARE @table_cursor CURSOR

        SET @table_cursor = CURSOR FAST_FORWARD FOR
            SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES

        OPEN @table_cursor
        FETCH NEXT FROM @table_cursor INTO @table_name

        WHILE @@FETCH_STATUS = 0
        BEGIN
            SET @query = 'INSERT INTO ' + @table_name +
                         ' SELECT * FROM MyDataBase.dbo.' + @table_name
            print @query
            exec @query

            FETCH NEXT FROM @table_cursor INTO @table_name
        END

        CLOSE @table_cursor
        DEALLOCATE @table_cursor

    The problem is that when I run the script, the "print @query" statement prints a statement like this:

        INSERT INTO table SELECT * FROM MyDataBase.dbo.table

    When I copy this and run it from Management Studio, it works fine. But when the script tries to run it with exec, I get this error:

        Msg 911, Level 16, State 1, Line 21
        Could not locate entry in sysdatabases for database 'INSERT INTO table SELECT * FROM MPDEV090314'.
        No entry found with that name. Make sure that the name is entered correctly.

    I hope someone can tell me what is wrong with this. Best regards, Iordan Tanev

    Read the article

  • Best practices for Magento deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage, I can't see any requirements for changing the DB, so the 3 databases will be manually maintained. Specifically:

      - How do you apply Magento upgrades? (Individually in each env, or on dev then roll out, or just give up on upgrades?)
      - What files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml)?
      - Do you restrict developers to editing specific files/folders?
      - Do you restrict theme designers to editing specific files/folders?
      - How do you manage database changes?

    Theme designer files/folders: designers can be restricted to editing the following folders:

        app/design/frontend/your_interface/your_theme/layout/
        app/design/frontend/your_interface/your_theme/template/
        app/design/frontend/your_interface/your_theme/locale/
        skin/frontend/your_interface/your_theme/

    Extension developer files/folders: extension developers can edit the following folders/files:

        /app/code/local
        /app/etc/modules/<Namespace>_<Module>.xml

    Database environment management: as the store's base URL is stored in the database, you cannot just copy databases between environments. Options include:

      - Overriding the base URL in PHP. (Blog article on setting up dev and staging databases)
      - Changing the base URL in the database after copying. (Where is this stored?)
      - Doing a mysqldump or backup, then doing a replace on the URL in the SQL file.

    Read the article

  • Directly call distutils' or setuptools' setup() function with command name/options, without parsing

    - by Ryan B. Lynch
    I'd like to call the setup() function of Python's distutils or setuptools in a slightly unconventional way, but I'm not sure whether distutils is meant for this kind of usage. As an example, let's say I currently have a 'setup.py' file which looks like this (lifted verbatim from the distutils docs; the setuptools usage is almost identical):

        from distutils.core import setup
        setup(name='Distutils',
              version='1.0',
              description='Python Distribution Utilities',
              author='Greg Ward',
              author_email='[email protected]',
              url='http://www.python.org/sigs/distutils-sig/',
              packages=['distutils', 'distutils.command'],
             )

    Normally, to build just the .spec file for an RPM of this module, I could run 'python setup.py bdist_rpm --spec-only', which parses the command line and calls the 'bdist_rpm' code to handle the RPM-specific stuff. The .spec file ends up in './dist'. How can I change my setup() invocation so that it runs the 'bdist_rpm' command with the '--spec-only' option, WITHOUT parsing command-line parameters? Can I pass the command name and options as parameters to setup()? Or can I manually construct a command line and pass that as a parameter instead?

    NOTE: I already know that I could call the script in a separate process, with an actual command line, using os.system() or the subprocess module or something similar. I'm trying to avoid any kind of external command invocation. I'm looking specifically for a solution that runs setup() in the current interpreter.

    For background, I'm converting some release-management shell scripts into a single Python program. One of the tasks is running 'setup.py' to generate a .spec file for further pre-release testing. Running 'setup.py' as an external command, with its own command-line options, seems like an awkward method, and it complicates the rest of the program. I feel like there may be a more Pythonic way.

    Read the article

  • Reuse, Rewrite, or Refactor?

    - by Jon Purdy
    At work I inherited development of a PHP-based Web site after the consultant who originally produced it bailed out and left without a trace. Literally half of the code is ripped from online tutorials, and there are thousands of lines of cruft that, being incomplete, do precious little. Hardly any of it actually works. I've been trying to pull out the usable components, such as the layout (cleverly intermixed with code), session management (delicately seasoned with unescaped, unvalidated SQL queries), and a few other things, but it's very difficult to force all of this junk into place. Further, I don't speak idiomatic PHP, being more of a Perl user, and I'm supposed to be on this project principally for maintenance, so rewriting everything seems like it would take just as long as wrestling the existing monster back into place. As an aside, I have literally never seen anything as badly written as this. Welcome me to the world of working with other people's code, I guess, but I do hope it's not this common in the real world to have such gems as these:

        // WHY IS THIS NOT WORKING
        // I know this is bad but were going for working stuff right now...
        // This is a PHP code outputing Javascript code outputting HTML...do not go further
        // Not userful

    I'm looking for the best advice I can get here. What would you do if you were in my position?

    Read the article

  • Insert a datetime value with the GetDate() function into a SQL Server (2005) table?

    - by David.Chu.ca
    I am working on (or fixing bugs in) an application which was developed in VS 2005 C#. The application saves data to a SQL Server 2005 database. One of the INSERT SQL statements tries to insert a timestamp value into a field, using the GetDate() T-SQL function as the date-time value:

        INSERT INTO table1 (field1, ... fieldDt) VALUES ('value1', ... GetDate());

    The reason for using the GetDate() function is that the SQL Server may be at a remote site, and the date-time may be in a different time zone. Therefore, GetDate() will always get the date from the server. The function can be verified in SQL Management Studio; this is what I get:

        SELECT GetDate(), LEN(GetDate()); -- 2010-06-10 14:04:48.293  19

    One thing I realize is that the length does not extend to the milliseconds, i.e., 19 is actually the length of '2010-06-10 14:04:48'. Anyway, the issue I have right now is that after the insert, fieldDt actually has a date-time value only up to the minute, for example '2010-06-10 14:04:00'. I am not sure why. I don't have permission to update or change the table with a trigger to update the field. My question is: how can I use an INSERT T-SQL statement to add a new row with a date-time value (the SQL Server's local date-time) with precision up to milliseconds?

    Read the article
