Search Results

Search found 47408 results on 1897 pages for 'database machine'.

Page 509 of 1897

  • Missing DLL Problem

    - by Liran
    Hi everyone. I have a C++ native application that was built under VS2005 (SP1) on machine A, in Debug mode. Now I need to run this application on a "clean" computer, where clean means it has no Visual Studio installed. When I copy the runtime folder from machine A to the clean machine and try to start the application, it demands that I reinstall the application. Missing DLLs are obviously causing this problem, because on machine A the app works just fine. Is there any "clean" solution for this kind of problem besides guessing which DLLs are missing? Maybe a smart tool or installer that indicates which DLLs are missing at runtime? Thanks, Liran

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision, but this had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes that were in trunk. Make customers wait until the next official release, which is usually a few months. We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. Then, new development would continue in trunk, and I can periodically merge specific fixes from trunk into the maintenance branch, and create a maintenance release when enough fixes are accumulated, while we continue to work on the next major update in parallel. I know we could also have a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me. The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in-place (all the setup projects and any dependencies that we don't compile ourselves are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files to the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code. I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible, and at the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires or to make other changes), there is a problem. If they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. 
However, this means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine). Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (e.g. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.

    Read the article

  • How to save, retrieve and draw an image using postgresql and java (no path saving)?

    - by spderosso
    Hi. Given an object X, I want this object to have an image. The image must be stored in the database; I can't store the path, the actual image must be in the database. My question can be answered by answering the following subquestions:
    a) What type of field should I use in the database? (e.g. VARCHAR)
    b) What type of object should I use for storing and manipulating the image at the object layer? (e.g. java.awt.Image)
    c) How do I create an object of the selected type (answer to question b) from the data obtained from the database?
    d) How do I save an object of the selected type (answer to question b) to the database?
    e) How do I draw the image on a web page?
    I am using PostgreSQL, Java, and it is a web application. Thanks!
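
    A minimal sketch of one common approach, assuming a BYTEA column (sub-question a) and java.awt.image.BufferedImage as the in-memory type (sub-question b); the table and column names here are hypothetical:

    // Store and load a BufferedImage as PNG bytes in a hypothetical
    // x_image(x_id INT PRIMARY KEY, img BYTEA) table.
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.imageio.ImageIO;

    public class ImageDao {
        // (d) save: serialize the image to PNG bytes and write them to the BYTEA column
        public void save(Connection cn, int id, BufferedImage img) throws Exception {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ImageIO.write(img, "png", buf);
            try (PreparedStatement ps =
                     cn.prepareStatement("INSERT INTO x_image (x_id, img) VALUES (?, ?)")) {
                ps.setInt(1, id);
                ps.setBytes(2, buf.toByteArray());
                ps.executeUpdate();
            }
        }

        // (c) load: read the bytes back and rebuild a BufferedImage
        public BufferedImage load(Connection cn, int id) throws Exception {
            try (PreparedStatement ps =
                     cn.prepareStatement("SELECT img FROM x_image WHERE x_id = ?")) {
                ps.setInt(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next()
                        ? ImageIO.read(new ByteArrayInputStream(rs.getBytes("img")))
                        : null;
                }
            }
        }
    }

    For (e), a small servlet can look up the row, set the Content-Type to image/png, and write the bytes to the response output stream; the page then references that servlet in an img tag.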

    Read the article

  • How to handle set based consistency validation in CQRS?

    - by JD Courtoy
    I have a fairly simple domain model involving a list of Facility aggregate roots. Given that I'm using CQRS and an event bus to handle events raised from the domain, how could you handle validation on sets? For example, say I have the following requirement: Facilities must have a unique name. Since I'm using an eventually consistent database on the query side, the data in it is not guaranteed to be accurate at the time the event processor processes the event. For example, a FacilityCreatedEvent is in the query database's event processing queue waiting to be processed and written into the database. A new CreateFacilityCommand is sent to the domain to be processed. The domain services query the read database to see if there are any other Facilities already registered with that name, but this returns false because the earlier FacilityCreatedEvent has not yet been processed and written to the store. The new CreateFacilityCommand will now succeed and raise another FacilityCreatedEvent, which would blow up when the event processor tries to write it into the database and finds that another Facility already exists with that name.
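
    One common way to handle this kind of set-level invariant (an approach not taken from the question itself) is to enforce it on the write side with a small, strongly consistent lookup instead of the eventually consistent read model: the command handler reserves the name under a unique constraint before FacilityCreatedEvent is published. A minimal sketch, assuming a JDBC DataSource and a hypothetical facility_name_reservation table with a UNIQUE column:

    // Write-side uniqueness guard. Table name, schema, and the lower-casing
    // policy are assumptions for illustration only.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.SQLIntegrityConstraintViolationException;
    import javax.sql.DataSource;

    public class FacilityNameReservation {
        private final DataSource ds;

        public FacilityNameReservation(DataSource ds) {
            this.ds = ds;
        }

        // Returns true if the name was free and is now reserved, false if it is taken.
        public boolean tryReserve(String name) throws SQLException {
            try (Connection cn = ds.getConnection();
                 PreparedStatement ps = cn.prepareStatement(
                     "INSERT INTO facility_name_reservation (name) VALUES (?)")) {
                ps.setString(1, name.toLowerCase());
                ps.executeUpdate();
                return true;   // the unique constraint accepted the row
            } catch (SQLIntegrityConstraintViolationException duplicate) {
                return false;  // another Facility already reserved this name
            }
        }
    }

    The command handler calls tryReserve before raising FacilityCreatedEvent and rejects the command when it returns false, so the eventually consistent read model never has to answer the uniqueness question; drivers that do not map duplicate-key errors to SQLIntegrityConstraintViolationException would need a SQLState 23xxx check instead.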

    Read the article

  • MySQL with Java: Open connection only if possible

    - by emempe
    I'm running a database-heavy Java application on a cluster, using Connector/J 5.1.14, so I have up to 150 concurrent tasks accessing the same MySQL database. I get the following error:
    Exception in thread "main" com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Too many connections
    This happens because the server can't handle so many connections, and I can't change anything on the database server. So my question is: can I check if a connection is possible BEFORE I actually connect to the database? Something like this (pseudocode):
    check database for open connection slots
    if (slot is free) {
        Connection cn = DriverManager.getConnection(url, username, password);
    } else {
        wait ...
    }
    Cheers
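
    There is no race-free way to ask the server for a free slot up front (another client could take it between the check and the connect), so a common workaround is simply to attempt the connection and back off when the server reports "Too many connections". A minimal sketch, assuming Connector/J surfaces MySQL error code 1040 for that condition; the retry count and delay are arbitrary placeholders:

    // Attempt the connection and retry with a delay while the server is saturated.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public final class PatientConnector {
        public static Connection connect(String url, String user, String password)
                throws SQLException, InterruptedException {
            SQLException last = null;
            for (int attempt = 0; attempt < 60; attempt++) {
                try {
                    return DriverManager.getConnection(url, user, password);
                } catch (SQLException e) {
                    last = e;
                    if (e.getErrorCode() != 1040) {   // not "Too many connections": give up
                        throw e;
                    }
                    Thread.sleep(5000L);              // server is full; wait and retry
                }
            }
            throw new SQLException("No free connection slot after 60 attempts", last);
        }
    }

    With 150 concurrent tasks, a shared, bounded connection pool (sized below the server's connection limit) is usually the cleaner fix, since tasks then queue for a pooled connection instead of competing for server-side slots.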

    Read the article

  • Weird access denied issue with WMI

    - by stackunderflow1
    I'm seeing a weird access denied issue with WMI. We're trying to create a differencing disk based on a parent VHD in a Windows service app that runs under Network Service (the machine account is an admin). Everything works fine when we create the diff disk on another machine using WMI, where we use an admin user account. However, we cannot do this on the local machine, as WMI doesn't take user credentials for the local machine. We thought the Network Service account should already have access for this, but it seems it doesn't, and even if we run the service under an admin service account, it fails. Any pointers?

    Read the article

  • What are these stray zero-byte files extracted from tarball? (OSX)

    - by Scott M
    I'm extracting a folder from a tarball, and I see these zero-byte files showing up in the result (where they are not in the source.) Setup (all on OS X): On machine one, I have a directory /My/Stuff/Goes/Here/ containing several hundred files. I build it like this tar -cZf mystuff.tgz /My/Stuff/Goes/Here/ On machine two, I scp the tgz file to my local directory, then unpack it. tar -xZf mystuff.tgz It creates ~scott/My/Stuff/Goes/, but then under Goes, I see two files: Here/ - a directory, Here.bGd - a zero byte file. The "Here.bGd" zero-byte file has a random 3-character suffix, mixed upper and lower-case characters. It has the same name as the lowest-level directory mentioned in the tar-creation command. It only appears at the lowest level directory named. Anybody know where these come from, and how I can adjust my tar creation to get rid of them? Update: I checked the table of contents on the files using tar tZvf: toc does not list the zero-byte files, so I'm leaning toward the suggestion that the uncompress machine is at fault. OS X is version 10.5.5 on the unzip machine (not sure how to check the filesystem type). Tar is GNU tar 1.15.1, and it came with the machine.

    Read the article

  • Logging Application Block doesn't add log entries to Event Viewer on machines other than that on which it was built

    - by Neo
    I am using the Logging Application Block (from Microsoft Enterprise Library 5.0) to log exceptions that occur in my WPF XBAP application to the Event Viewer. However, exceptions are only logged if the application is run on my machine (the machine it was built on); on any other machine nothing is logged. I've tried to find a reason why this might be occurring - I've tried setting requirePermission to false - but to no avail. Does anyone have any ideas on why this might be happening?

    Read the article

  • Oracle Connection exception via JDBC

    - by sachin
    I have installed Oracle 11gR2 on my machine. When I try to connect to it using 'localhost' or '127.0.0.1' as the address there is no issue, but when I use the machine's own IP address, '192.168.1.6', it throws the exception: Io exception: The Network Adapter could not establish the connection. I installed the MS Loopback Adapter prior to installation and my machine gets its IP from DHCP. Do I need to configure some setting in the Oracle config, or what might I be missing here?
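
    For reference, a hypothetical thin-driver connection of the kind that fails in this scenario; the host, port, SID, and credentials below are placeholders. If the listener was configured against the loopback adapter, it may only be accepting connections on localhost, so the host entries in listener.ora/tnsnames.ora are worth checking:

    // Placeholder connection attempt with the Oracle thin driver.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OracleConnectCheck {
        public static void main(String[] args) throws Exception {
            // Fails with "The Network Adapter could not establish the connection"
            // when nothing is listening on this host/port combination.
            String url = "jdbc:oracle:thin:@192.168.1.6:1521:orcl";
            try (Connection cn = DriverManager.getConnection(url, "scott", "tiger")) {
                System.out.println("Connected: " + cn.getMetaData().getURL());
            }
        }
    }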

    Read the article

  • Using XA Transactions in Coherence-based Applications

    - by jpurdy
    While the costs of XA transactions are well known (e.g. increased data contention, higher latency, significant disk I/O for logging, availability challenges, etc.), in many cases they are the most attractive option for coordinating logical transactions across multiple resources. There are a few common approaches when integrating Coherence into applications via the use of an application server's transaction manager: Use of Coherence as a read-only cache, applying transactions to the underlying database (or any system of record) instead of the cache. Use of TransactionMap interface via the included resource adapter. Use of the new ACID transaction framework, introduced in Coherence 3.6.   Each of these may have significant drawbacks for certain workloads. Using Coherence as a read-only cache is the simplest option. In this approach, the application is responsible for managing both the database and the cache (either within the business logic or via application server hooks). This approach also tends to provide limited benefit for many workloads, particularly those workloads that either have queries (given the complexity of maintaining a fully cached data set in Coherence) or are not read-heavy (where the cost of managing the cache may outweigh the benefits of reading from it). All updates are made synchronously to the database, leaving it as both a source of latency as well as a potential bottleneck. This approach also prevents addressing "hot data" problems (when certain objects are updated by many concurrent transactions) since most database servers offer no facilities for explicitly controlling concurrent updates. Finally, this option tends to be a better fit for key-based access (rather than filter-based access such as queries) since this makes it easier to aggressively invalidate cache entries without worrying about when they will be reloaded. The advantage of this approach is that it allows strong data consistency as long as optimistic concurrency control is used to ensure that database updates are applied correctly regardless of whether the cache contains stale (or even dirty) data. Another benefit of this approach is that it avoids the limitations of Coherence's write-through caching implementation. TransactionMap is generally used when Coherence acts as system of record. TransactionMap is not generally compatible with write-through caching, so it will usually be either used to manage a standalone cache or when the cache is backed by a database via write-behind caching. TransactionMap has some restrictions that may limit its utility, the most significant being: The lock-based concurrency model is relatively inefficient and may introduce significant latency and contention. As an example, in a typical configuration, a transaction that updates 20 cache entries will require roughly 40ms just for lock management (assuming all locks are granted immediately, and excluding validation and writing which will require a similar amount of time). This may be partially mitigated by denormalizing (e.g. combining a parent object and its set of child objects into a single cache entry), at the cost of increasing false contention (e.g. transactions will conflict even when updating different child objects). If the client (application server JVM) fails during the commit phase, locks will be released immediately, and the transaction may be partially committed. In practice, this is usually not as bad as it may sound since the commit phase is usually very short (all locks having been previously acquired). 
Note that this vulnerability does not exist when a single NamedCache is used and all updates are confined to a single partition (generally implying the use of partition affinity). The unconventional TransactionMap API is cumbersome but manageable. Only a few methods are transactional, primarily get(), put() and remove(). The ACID transactions framework (accessed via the Connection class) provides atomicity guarantees by implementing the NamedCache interface, maintaining its own cache data and transaction logs inside a set of private partitioned caches. This feature may be used either as a local transactional resource or as a logging XA resource. However, a lack of database integration precludes the use of this functionality for most applications. A side effect of this is that this feature has not seen significant adoption, meaning that any use of this is subject to the usual headaches associated with being an early adopter (greater chance of bugs and greater risk of hitting an unoptimized code path). As a result, for the moment, we generally recommend against using this feature. In summary, it is possible to use Coherence in XA-oriented applications, and several customers are doing this successfully, but it is not a core usage model for the product, so care should be taken before committing to this path. For most applications, the most robust solution is normally to use Coherence as a read-only cache of the underlying data resources, even if this prevents taking advantage of certain product features.
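
    For readers who have not seen it, typical TransactionMap usage looks roughly like the sketch below (Coherence 3.x API via CacheFactory.getLocalTransaction; the cache name, keys, and the transfer scenario are illustrative assumptions, not taken from the article):

    // Pessimistic, repeatable-read TransactionMap sketch. Each locked entry
    // adds to the per-transaction latency discussed above.
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.TransactionMap;

    public class TransferExample {
        public static void transfer(String fromId, String toId, double amount) {
            NamedCache accounts = CacheFactory.getCache("dist-accounts");
            TransactionMap tx = CacheFactory.getLocalTransaction(accounts);
            tx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
            tx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
            tx.begin();
            try {
                double from = (Double) tx.get(fromId);
                double to   = (Double) tx.get(toId);
                tx.put(fromId, from - amount);
                tx.put(toId, to + amount);
                tx.prepare();
                tx.commit();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;
            }
        }
    }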

    Read the article

  • MVC C# Controller Method to return Tables

    - by Rob Tiu
    I'm a real beginner with MVC and my issue is this: I have an .mdf database with multiple tables and I want to have a method return "ANY" table from the database and pass it to an aspx view. Examples of other tables in the database: Articles, Products, Supplies. Here is an example of my code to view an Article table from the database:

    // USING LINQ-TO-SQL CONTEXT DATABASE
    public ActionResult ArticlePage()
    {
        tinypeas_db_contextDataContext context = HttpContext.Application["context"] as tinypeas_db_contextDataContext;
        try
        {
            return View(context.Articles);
        }
        catch
        {
            return Json(false, JsonRequestBehavior.AllowGet);
        }
    }

    How would I modify this method to dynamically pass any table to the view? Or should I be using something else other than LINQ to SQL?

    Read the article

  • Two Instances of Sql Server (2005 and 2008)

    - by Felipe
    Hi all, I installed Visual Studio 2008 Professional on my machine, and it installed a SQL Server 2005 Express instance, which I was using just fine. I also installed SQL Server Management Studio, and it works great. Then this week I installed Visual Studio 2010 Pro, and its setup installed SQL Server 2008 Express, which overwrote my SQL Server 2005 Express instance. Now I'd like to know how I can have two instances of SQL Server Express on my machine, 2005 and 2008. I can only access 2008, not 2005 :( and my projects use 2005. Somebody help me! Thanks. Bye

    Read the article

  • Java long task - Did it stop writing to file?

    - by rockit
    I am writing a lot of data to a file, and while keeping my eye on the file it eventually stopped growing in size. Essentially my task is getting information from a database and printing out all non-unique values in column A. Since there are many rows in the database table, and the database table is across my network, this is taking days to complete. Thus I'm concerned that, since the file isn't growing, it isn't actually being written to anymore. Which is odd: I have no "catch" blocks in my code, so if there was a problem writing to the file, wouldn't it have thrown an error? Should I let the task complete (estimated 2-3 days from today), or is there something else going on here that I don't know about, making my application not write to the file? My algorithm goes something like this:

    Declare file
    Create new file
    Open file for writing
    get database connection
    get resultset from database
    for each row in the resultset
      - write column "A" to file
      - if row# % 100000 then write to screen "completed " + row# + " rows"
    when no more rows exist
    close file
    write to screen - "completed"
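
    One possible explanation (an assumption, not something stated in the question) is that the output writer is buffered, so bytes only reach the file when the buffer fills or is flushed; if the remote ResultSet is the bottleneck, the file can look frozen for long stretches while the job is still running. A sketch of the same loop with a periodic flush so on-disk progress stays visible; the connection URL, query, and file name are placeholders:

    // Flush the buffered writer every 100,000 rows so the file size reflects
    // real progress instead of waiting for the buffer to fill.
    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ExportColumnA {
        public static void main(String[] args) throws Exception {
            try (Connection cn = DriverManager.getConnection(
                     "jdbc:mysql://dbhost/mydb", "user", "password");        // placeholder URL
                 Statement st = cn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT a FROM big_table");  // placeholder query
                 BufferedWriter out = new BufferedWriter(new FileWriter("columnA.txt"))) {
                long row = 0;
                while (rs.next()) {
                    out.write(rs.getString("a"));
                    out.newLine();
                    if (++row % 100000 == 0) {
                        out.flush();                     // make progress visible on disk
                        System.out.println("completed " + row + " rows");
                    }
                }
                System.out.println("completed");
            }   // try-with-resources closes the writer and JDBC objects even on failure
        }
    }

    Since the question's code has no catch blocks, a genuine write failure would normally surface as an exception, so a silent stall is more consistent with buffering plus a slow remote ResultSet than with a failed write.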

    Read the article

  • SQL Server missing tables and stored procedures

    - by Robo
    I have an application on a client's site that processes data each night. Last night SQL Server 2005 gave the error "Could not find stored procedure 'xxxx'". The stored procedure does exist in the database and has the right permissions as far as I can tell, and the application runs fine on other nights as well. On previous occasions, SQL Server has also given an error saying 'database object not found', referring to a table in the database that does exist. So, on rare occasions, the server thinks certain stored procedures and tables do not exist in the database. The objects it refers to are often ones that are frequently used. Is the database somehow corrupted, and is there some sort of repair/health check I can do?

    Read the article

  • Separation of domain and UI layer in a composite

    - by hansmaad
    Hi all, I'm wondering if there is a pattern for separating the domain logic of a class from the UI responsibilities of the objects in the domain layer. Example:

    // Domain classes
    interface MachinePart {
        CalculateX(in, out)
        // Where do we put these:
        // Draw(Screen) ??
        // ShowProperties(View) ??
        // ...
    }
    class Assembly : MachinePart {
        CalculateX(in, out)
        subParts
    }
    class Pipe : MachinePart {
        CalculateX(in, out)
        length, diameter...
    }

    There is an application that calculates the value X for machines assembled from many machine parts. The assembly is loaded from a file representation and is designed as a composite. Each concrete part class stores some data to implement the CalculateX(in, out) method to simulate the behaviour of the whole assembly. The application runs well but has no GUI. To increase usability, a GUI should be developed on top of the existing implementation (changes to the existing code are allowed). The GUI should show a schematic graphical representation of the assembly and provide part-specific dialogs to edit several parameters. To achieve these goals the application needs new functionality for each machine part: drawing a schematic representation on the screen, showing a property dialog, and other things not related to the domain of machine simulation. I can think of some different solutions to implement Draw(Screen) functionality for each part, but I am not happy with any of them. First, I could add a Draw(Screen) method to the MachinePart interface, but this would mix up domain code with UI code, and I would have to add a lot of functionality to each machine part class, which makes my domain model hard to read and hard to understand. Another "simple" solution is to make all parts visitable and implement the UI code in visitors, but Visitor is not one of my favorite patterns. I could derive UI variants from each machine part class and add the UI implementation there, but I would have to check whether each part class is suited for inheritance and be careful about changes to the base classes. My currently favorite design is to create a parallel composite hierarchy where each component stores data to define a machine part, implements the UI methods, and has a factory method which creates instances of the corresponding domain classes, so that I can "convert" a UI assembly to a domain assembly. But there are problems going back from the created domain hierarchy to the UI hierarchy, for example to show calculation results in the drawing (imagine some parts store values during the calculation that I want to show in the schematic representation after the simulation). Maybe there are some proven patterns for such problems?
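
    For comparison, a minimal sketch of the visitor option mentioned above, in Java (the question's code is language-neutral pseudocode, so these types and the drawing calls are assumptions):

    // Visitor sketch: UI concerns live in visitor implementations; the domain
    // classes only expose accept(). Names mirror the question's pseudocode.
    interface MachinePartVisitor {
        void visit(Assembly assembly);
        void visit(Pipe pipe);
    }

    interface MachinePart {
        double calculateX(double in);
        void accept(MachinePartVisitor visitor);
    }

    class Pipe implements MachinePart {
        final double length, diameter;
        Pipe(double length, double diameter) { this.length = length; this.diameter = diameter; }
        public double calculateX(double in) { return in * length / diameter; } // placeholder formula
        public void accept(MachinePartVisitor v) { v.visit(this); }
    }

    class Assembly implements MachinePart {
        final java.util.List<MachinePart> subParts = new java.util.ArrayList<>();
        public double calculateX(double in) {
            double out = in;
            for (MachinePart p : subParts) out = p.calculateX(out);
            return out;
        }
        public void accept(MachinePartVisitor v) {
            v.visit(this);
            for (MachinePart p : subParts) p.accept(v);   // visit children too
        }
    }

    // UI layer: all drawing code stays here, outside the domain classes.
    class SchematicDrawingVisitor implements MachinePartVisitor {
        public void visit(Assembly assembly) { System.out.println("draw assembly frame"); }
        public void visit(Pipe pipe) {
            System.out.println("draw pipe " + pipe.length + " x " + pipe.diameter);
        }
    }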

    Read the article

  • After mounting using sshfs I cannot commit my changes using subversion

    - by robUK
    Hello,
    local machine: Fedora 13, Subversion 1.6.9
    remote machine: CentOS 5.3, Subversion 1.4.2
    I have a project which is on the remote machine: [email protected]:projects/ssd1 I have mounted this on my local machine: sshfs [email protected]:projects/ssd1 /home/jbloggs/projects/mnt/ssd1 Everything mounts OK, so I open my project using GNU Emacs 23.2.1. When I want to commit my changes in Emacs I get the following error: can't move /home/jbloggs/projects/mnt/ssd1/.svn/tmp/entries to /home/jbloggs/mnt/ssd1/.svn/entries: Operation not permitted Does anyone know how I can resolve this issue? Many thanks for any advice.

    Read the article

  • Asynchronous File I/O in .NET

    - by uno
    I followed the example at this link (Async I/O). The example works on my local machine. However, when I deploy to my test machine (Windows Server 2003), it seems to work on 24 files and then the application stops. Procmon shows that it's working on 24 files and then there is no data. My local machine is Windows XP. The question is: why would this behave so differently between XP and Windows Server 2003?

    Read the article

  • I can connect to Samba server but cannot access shares.

    - by jlego
    I'm having trouble getting Samba sharing working so that clients can access the shares. I have set up a stand-alone box running Fedora 16 to use as a file-sharing and web development server. It needs to be able to share files with a Windows 7 PC and a Mac running OSX Snow Leopard. I've set up Samba using the Samba configuration GUI tool on Fedora, added users to Fedora, and connected them as Samba users (which are the same as the Windows and Mac usernames and passwords). The workgroup name is the same as the Windows workgroup. Authentication is set to User. I've allowed Samba and the Samba client through the firewall and set the ethernet interface to a trusted port in the firewall. Both the Windows and Mac machines can connect to the server and view the shares, but when trying to access the shares, Windows throws error 0x80070035: "Windows cannot access \\SERVERNAME\ShareName." The Windows user is not prompted for a username or password when accessing the server (found under "Network Places"). This also happens when connecting with the IP rather than the server name. The Mac can also connect to the server and see the shares, but when choosing a share it gives the error: "The original item for ShareName cannot be found." When connecting via IP, the Mac user is prompted for username and password, which when authenticated gives a list of shares; however, when choosing a share to connect to, the error is displayed and the user cannot access the share. Since both machines are acting similarly when trying to access the shares, I assume it is an issue with how Samba is configured.

    smb.conf:

    [global]
        workgroup = workgroup
        server string = Server
        log file = /var/log/samba/log.%m
        max log size = 50
        security = user
        load printers = yes
        cups options = raw
        printcap name = lpstat
        printing = cups

    [homes]
        comment = Home Directories
        browseable = no
        writable = yes

    [printers]
        comment = All Printers
        path = /var/spool/samba
        browseable = yes
        printable = yes

    [FileServ]
        comment = FileShare
        path = /media/FileServ
        read only = no
        browseable = yes
        valid users = user1, user2

    [webdev]
        comment = Web development
        path = /var/www/html/webdev
        read only = no
        browseable = yes
        valid users = user1

    How do I get Samba sharing working? UPDATE: I figured it out; it was because I was sharing a second hard drive. See the checked answer below. Speculation 1: Before this box I had another box with the same version of Fedora installed (16) and Samba working for these same computers. I started up the old machine and copied the smb.conf file from the old machine to the new one (editing the share definitions for the new shares, of course) and I still get the same errors on both client machines. The only difference in environment is the hardware and the router. On the old machine the router received a dynamic public IP and assigned dynamic private IPs to each device on the network, while the new machine is connected to a router that has a static public IP (still dynamic internal IPs though). Could either one of these be affecting Samba? Speculation 2: As the directory I am trying to share is actually an entire internal disk, I have tried these things: 1.) changing the owner of the mounted disk from root to my user (which is the same username as on the Windows machine) 2.) made a share that only included one of the folders on the disk instead of the entire disk, with my user again as the owner. Both tests failed, giving me the same errors regarding the network address. Speculation 3: Whenever I try to connect to the share on the Windows 7 client I am prompted for my username and password.
When I enter the correct credentials I get an access denied message. However I did notice that under the login box "domain: WINDOWS-PC-NAME" is listed. I believe this could very well be the problem. Speculation 4: So I've completely reinstalled Fedora and Samba now. I've created a share on the first harddrive (one fedora is installed on) and I can access that fine from Windows. However when I try to share any data on the second disk, I am receiving the same error. This I believe is the problem. I think I need to change some things in fstab or fdisk or something. Speculation 5: So in fstab I mapped the drive to automount in a folder which works correctly. I also added the samba_share_t SElinux label to the mountpoint directory which now allows me to access the shares on the Windows machine, however I cannot see any of the files in the directory on the windows machine. (They are there, I can see them in the fedora file browser locally)

    Read the article

  • Passwordless SSH using cgi-perl script

    - by AV
    Hello, this is my first shot at trying out CGI Perl scripts. I have SSH keys set up between my local machine (as the root user) and a remote machine. I'm trying to run a command on the remote box and display the output on a webpage hosted from my local machine. The script runs fine from the command line; however, it throws an SSH key error when called from the webpage, because the user running the script is apache and not root. Is there a way to get around this issue?

    Read the article

  • Different versions in manifest on different machines

    - by Terry777
    Hi guys, I have two machines, both with VS2005 SP1 installed and with WinSXS showing the same things installed. When one machine builds a particular C++ .dll .vcproj it ends up with <assemblyIdentity type='win32' name='Microsoft.VC80.MFC' version='8.0.50727.762' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> in its manifest file. But on the other machine it ends up with <assemblyIdentity type='win32' name='Microsoft.VC80.MFC' version='8.0.50608.0' processorArchitecture='x86' publicKeyToken='1fc8b3b9a1e18e3b' /> even though this machine does not have 8.0.50608.0 libraries listed in its WinSXS. The .dll built on this machine with the older version referenced has some problems. I have ensured both machines have the same latest source code and references, etc. What could be causing it to build with the different reference? Thanks! Terry

    Read the article

  • Work with Sun Solaris Operating System from .Net Based Application

    - by Harryboy
    Hello friends, I have a rather unusual requirement. Our client has a network management system (Netcool) that reads its list of machines from two text files. Whenever a new machine is added to those text files, the application needs a restart. We need to develop a GUI that writes new machines into those files and restarts the application. I was in favor of a Java-based application for this, but everybody here wants a solution in ASP.NET. Now I am not sure whether it is possible to write to a file on a Sun Solaris-based operating system from a .NET application and then restart the process running on that machine. Please advise; it would be great if you have any articles or examples on this.

    Read the article

  • Detecting operating system or computer name through a Java servlet

    - by Ankur
    I have a Java web app that I develop on a Windows machine and will deploy on a Unix machine. There are some file path settings and permission details that differ between the two (and there is nothing I can do to change this). Is there some way of detecting which machine the app is sitting on (it's only one of two), either by detecting the operating system or the computer's name, so I can then use the appropriate settings?
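
    A minimal sketch of both checks inside a servlet; the example paths are placeholders for whatever settings differ between the two machines:

    // Detect the OS via system properties and the host name via InetAddress,
    // then pick settings accordingly. The settings values are placeholders.
    import java.io.IOException;
    import java.net.InetAddress;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class EnvironmentServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            String os = System.getProperty("os.name");              // e.g. "Windows 7", "Linux", "SunOS"
            String host = InetAddress.getLocalHost().getHostName(); // the machine's name
            String uploadDir = os.toLowerCase().contains("windows")
                    ? "C:\\app\\uploads"    // development box (placeholder path)
                    : "/var/app/uploads";   // Unix deployment box (placeholder path)
            resp.setContentType("text/plain");
            resp.getWriter().println(os + " / " + host + " -> " + uploadDir);
        }
    }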

    Read the article
