Search Results

Search found 69877 results on 2796 pages for 'ibm data studio'.

  • How to pin "Visual Studio 2010 Documentation" shortcut to Windows 7 taskbar?

    - by Chris W. Rea
    I just installed Microsoft Visual Studio 2010 at home, on my Windows 7 PC. One of the items installed with VS2010 is "Microsoft Visual Studio 2010 Documentation". I like to have the documentation installed locally and at my fingertips, and so I had previously always added a shortcut for the help viewer to my Quick Launch toolbar. However, I'm not able to pin the new documentation to the Windows 7 taskbar. It's frustrating. Note carefully: when I launch "Microsoft Visual Studio 2010 Documentation" from the Start menu, it seems to perform two functions. First, it launches the "Help Library Agent", which is a local HTTP server from which the help content is served... similar to the local ASP.NET web development server. Second, it launches the default web browser against the localhost URL corresponding to the port on which the "Help Library Agent" is running, for example: http://127.0.0.1:47873/help/1-1444/ms.help?method=f1&query=msdnstart&product=VS&productVersion=100&locale=en-US ... in other words, the program doesn't leave behind an active foreground process that displays in the taskbar. So, I can't choose "Pin this program to taskbar" as one might with a typical program. How can I get a shortcut to "Microsoft Visual Studio 2010 Documentation" onto the Windows 7 taskbar? Has anybody got a workaround for this?

    Read the article

  • New virtualization project and old SAN

    - by Chris
    Hi, we'll shortly start a partial virtualization of our infrastructure and consolidate a dozen servers into virtual instances. We'll also add some client application virtualization into the mix for good measure. Two HP DL380s with the new Xeon 56xx CPUs and 96 GB of memory each, running XenServer + XenApp, will then take charge of most of our IT needs. So far, so good. One element that is missing from the picture is the storage part. We need some sort of shared storage to enable live migration and other HA features. We have an IBM DS 4300 SAN that we can use for that. But since it has been in production since 2005, I'm not sure about giving such a critical role to a five-year-old part. So my question is: what is the reliability of this kind of equipment after 5 years? Can it last 10 years with few or no problems? Since our budget is tight, not buying another SAN would be a big plus. This leads me to another question: FC disks cost an arm and a leg from IBM. When I type the replacement part number into Google (for example IBM 300GB 15K 4GBPS FC HDD 42D0410), I can find it at a fraction of the price at various sites. So am I stupid to buy from IBM, or naive to trust third-party resellers? Thanks, Chris

    Read the article

  • Excel 2010 data validation warning (compatibility mode)

    - by Madmanguruman
    We have some legacy worksheets that were created in Excel 2003, which are used by LabVIEW-based test automation software. The current LabVIEW software can only handle the legacy .xls format, so we're forced to keep these worksheets as-is for the time being. We've migrated to Office 2010 and when working with these worksheets, I see this warning: "The following features in this workbook are not supported by earlier versions of Excel. These features may be lost or degraded when you save this workbook in the currently selected file format. Click Continue to save the workbook anyway. To keep all of your features, click Cancel and then save the file in one of the new file formats." "Significant loss of functionality" "One or more cells in this workbook contain data validation rules which refer to values on other worksheets. These data validation rules will not be saved." When I click 'Find', some cells that do indeed have validation rules are highlighted, but those rules are all on the same worksheet! We're using simple list-based validation, with some cells off to the side containing the valid values (for example, cell B4 has a List with Source "=$D$4:$E$4"). This makes no sense to me whatsoever. First, the workbook was created in Excel 2003, so obviously we couldn't implement a feature that doesn't exist. Second, the modifications we're making don't involve changing the validation rules at all. Third, the complaint that Excel is making is incorrect! All of the rules are on the same worksheet as the target. As if the story wasn't bizarre enough: I went ahead and saved the worksheet with Excel 2010. I then went to an old computer back in the lab and opened the document with Excel 2003. Guess what - the validations were untouched! My questions are: is this a legitimate bug in Excel 2010, or is this some exotic error in the legacy .xls worksheet that is confusing the heck out of Excel 2010? Has anyone else observed this issue when working in compatibility mode?

    Read the article

  • Tools for displaying a multidimensional data table?

    - by ShreevatsaR
    [Apologies if this sort of question is off-topic for SuperUser. Please redirect to the right place if so.] There is a 3-dimensional array of values. (That is, instead of a table/2-dimensional array with values in a grid, the values can be thought of in a cube instead.) Is there a way to display this "cube" interactively, ideally on a webpage? Specifically, given the data, it would work something like this: the user selects two of the 3 variables. He then sees a "stack" of tables, one for each value of the third variable (cross-sections, in other words). By selecting the appropriate table from the stack, he can see the (i,j,k) value he wants. The "technology" for displaying such a thing (stacked tables, rotation, etc.) already exists, so this seems the sort of thing that someone ought to have written already. To be clear: I don't need sophisticated graphics necessarily, just the ability to select from cross-sections of variables. But I have no experience with (say, for displaying on a webpage) what web gadgets exist, so I'm clueless how to even search for one. (Google searches like "multidimensional data visualization" didn't throw up anything useful. Google Spreadsheets can do a few kinds of charts which can be embedded on a webpage, but I cannot tell if this is one of them.) [I can imagine how it ought to work for higher dimensions. For four dimensions, instead of selecting just a stack, you'd first select an (i,j) from an "outer table", which would show all (k,l) values for that (i,j). For higher dimensions, inductively: you select (i,j), and then repeat what you'd do with 2 fewer dimensions.] So has this been written? Is this easy to write? Where ought one to look for such a thing?
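    The interaction described is just taking 2-D cross-sections of a 3-D array. This doesn't answer the web-widget part of the question, but as a minimal illustration of the underlying data operation (NumPy, with an invented 4x3x5 cube), the "stack of tables" for a chosen pair of variables is simply:

        import numpy as np

        cube = np.arange(4 * 3 * 5).reshape(4, 3, 5)   # invented (i, j, k) data

        # User picks variables i and j; the "stack" is one 2-D table per value of k.
        stack = [cube[:, :, k] for k in range(cube.shape[2])]

        print(stack[2])          # the cross-section table for k = 2
        print(cube[1, 0, 2])     # the single (i, j, k) value the user drills down to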

    Read the article

  • Import data in Excel that doesn't have a row delimiter, but number of columns is known

    - by Alex B
    So I have this text file that looks something like this: Header1 Header2 Header3 Header4 A1 B1 C1 D1 A2 B2 C2 D2 and so on. When imported, I'd want the data to format itself into 4 columns. I tried Get External Data from Text, and it successfully imports the data, but it doesn't wrap it around, so it just keeps making columns for every space. I'd want it to go on to the next line after 4 (in this case) elements have been added. What's the simplest way to achieve this? EDIT: My answer follows, since I'm not allowed to answer my own questions yet. The Excel function I needed is called INDIRECT(). I'm not sure how it actually works, though, so hopefully someone can help out with that, but the function call that worked for me is =INDIRECT(ADDRESS((ROW(A1)-1)*4+COLUMN(A1),1)) which I found over here: http://www.ozgrid.com/forum/showthread.php?t=101584&p=456031#post456031 Note: this required me to add the text to Excel where I'd get this row full of columns, and then flip it so that I'd have a column full of rows.
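    For reference, the same reshape is easy to do outside Excel. This is a minimal, hypothetical Python script (the input file name and the column count of 4 are assumptions) that wraps the single whitespace-separated stream into rows and writes a CSV that Excel can open directly:

        import csv

        COLUMNS = 4                      # known number of columns (assumption)

        with open("data.txt") as f:      # hypothetical input file
            tokens = f.read().split()    # whitespace-separated values, no row delimiter

        rows = [tokens[i:i + COLUMNS] for i in range(0, len(tokens), COLUMNS)]

        with open("data.csv", "w", newline="") as f:
            csv.writer(f).writerows(rows)   # the first row written is the header row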

    Read the article

  • Creating a test database with copied data *and* its own data

    - by Jordan Reiter
    I'd like to create a test database that each day is refreshed with data from the production database. BUT, I'd like to be able to create records in the test database and retain them rather than having them be overwritten. I'm wondering if there is a simple straightforward way to do this. Both databases run on the same server, so apparently that rules out replication? For clarification, here is what I would like to happen: Test database is created with production data. I create some test records that I want to keep running on the test server (basically so I can have example records that I can play with). Next day, the database is completely refreshed, but the records I created that day are retained. Records that were untouched that day are replaced with records from the production database. The complication is if a record in the production database is deleted, I want it to be deleted on the test database too, so I do want to get rid of records in the test database that no longer exist in the production database, unless those records were created within the test database. Seems like the only way to do this would be to have some sort of table storing metadata about the records being created? So for example, something like this:

        CREATE TABLE MetaDataRecords (
            id integer not null primary key auto_increment,
            tablename varchar(100),
            action char(1),
            pk varchar(100)
        );

        DELETE FROM testdb.users
        WHERE NOT EXISTS (SELECT * FROM proddb.users
                          WHERE proddb.users.id = testdb.users.id)
          AND NOT EXISTS (SELECT * FROM testdb.MetaDataRecords
                          WHERE testdb.MetaDataRecords.pk = testdb.users.pk
                            AND testdb.MetaDataRecords.action = 'C'
                            AND testdb.MetaDataRecords.tablename = 'users');
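    For illustration, a nightly refresh along the lines described might be driven like this. It is only a sketch that assumes MySQL, an integer id primary key shared by both databases, the MetaDataRecords table above, and the mysql-connector-python package; the connection details are placeholders:

        import mysql.connector  # assumes the mysql-connector-python package

        conn = mysql.connector.connect(user="sync", password="...", database="testdb")
        cur = conn.cursor()

        TABLE = "users"  # in practice, loop over every synced table

        # 1. Delete test rows that vanished from production, unless they were created in test.
        cur.execute("""
            DELETE FROM testdb.{t}
            WHERE NOT EXISTS (SELECT 1 FROM proddb.{t} p WHERE p.id = testdb.{t}.id)
              AND NOT EXISTS (SELECT 1 FROM testdb.MetaDataRecords m
                              WHERE m.tablename = %s AND m.action = 'C'
                                AND m.pk = testdb.{t}.id)
        """.format(t=TABLE), (TABLE,))

        # 2. Re-copy production rows, overwriting everything except rows flagged as test-created.
        cur.execute("""
            REPLACE INTO testdb.{t}
            SELECT p.* FROM proddb.{t} p
            WHERE NOT EXISTS (SELECT 1 FROM testdb.MetaDataRecords m
                              WHERE m.tablename = %s AND m.action = 'C' AND m.pk = p.id)
        """.format(t=TABLE), (TABLE,))

        conn.commit()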

    Read the article

  • saving data from a failing drive

    - by intuited
    An external 3½" HDD seems to be in danger of failing — it's making ticking sounds when idle. I've acquired a replacement drive and want to know the best strategy to get the data off of the dubious drive with the best chance of saving as much as possible. There are some directories that are more important than others. However, I'm guessing that picking and choosing directories is going to reduce my chances of saving the whole thing. I would also have to mount it, dump a file listing, and then unmount it in order to be able to effectively prioritize directories. Adding in the fact that it's time-consuming to do this, I'm leaning away from this approach. I've considered just using dd, but I'm not sure how it would handle read errors or other problems that might prevent only certain parts of the data from being rescued, or which could be overcome with some retries, though not with so many retries that they endanger my chances of saving other parts of the drive. I guess ideally it would do a single pass to get as much as possible and then go back to retry anything that was missed due to errors. Is it possible that copying more slowly — e.g. pausing every x MB/GB — would be better than just running the operation full tilt, for example to avoid any overheating issues? For the "where is your backup" crowd: this actually is my backup drive, but it also contains some non-critical and bulky stuff, like music, that isn't backed up anywhere else. The drive has not exhibited any clear signs of failure other than this somewhat ominous sound. I did have to fsck a few errors recently — orphaned inodes, incorrect free blocks/inodes counts, inode bitmap differences, zero dtime on deleted inodes; about 20 errors in all. The filesystem of the partition is ext3.
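    As a rough sketch of the strategy described (one fast pass that skips unreadable regions, then a retry pass over what was missed), here is some illustrative Python. The device path and image name are placeholders, and a dedicated tool such as GNU ddrescue implements the same idea far more robustly:

        import os
        import time

        SRC, DST = "/dev/sdX", "rescue.img"   # placeholder device and image path
        BLOCK = 1024 * 1024                   # copy in 1 MiB chunks

        src = os.open(SRC, os.O_RDONLY)
        dst = os.open(DST, os.O_RDWR | os.O_CREAT)
        size = os.lseek(src, 0, os.SEEK_END)

        bad = []                              # offsets that failed on the first pass
        for off in range(0, size, BLOCK):
            try:
                os.lseek(src, off, os.SEEK_SET)
                data = os.read(src, BLOCK)
                os.lseek(dst, off, os.SEEK_SET)
                os.write(dst, data)
            except OSError:
                bad.append(off)               # skip now, retry later
            time.sleep(0.01)                  # brief pause to keep the drive cool

        for off in bad:                       # second pass: retry each failure once
            try:
                os.lseek(src, off, os.SEEK_SET)
                data = os.read(src, BLOCK)
                os.lseek(dst, off, os.SEEK_SET)
                os.write(dst, data)
            except OSError:
                pass                          # give up on this block

        os.close(src)
        os.close(dst)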

    Read the article

  • Recovering data from an external hard drive

    - by CCallaghan
    I have a WD Elements 2GB hard drive (formatted NTFS). I accidentally kicked out the USB cable while writing data to the disk, and now I can't access most of the data. Although this was ostensibly my backup drive, there is a great deal of important material on there which was only on there. I realise how idiotic this makes me. (So, formatting is not an option.) Things I've tried/information I've gathered: Windows Explorer will recognise the drive itself. However, it will not access most directories therein (and will sometimes crash when exploring). I can access all of the directories through the command line, but the dir command will often report that it can't read any files in most of the directories. The situation was similar when I hooked it up to an Ubuntu machine: the file explorer crashed, but I could access directories - but not files in those directories - via terminal commands. Several files I tried to copy out either resulted in an I/O error being reported or resulted in the command line crashing. The Disk Management utility on Windows reports a healthy disk formatted as NTFS and not RAW. It also indicates the correct amount of space used up and its capacity (so it seems that the files are not deleted). I've tried to run chkdsk, but that hangs on Step 2 (checking indexes) at 74%. Step 1 reported no bad sectors. I tried Recuva, but that didn't seem to work (stalled at 0% for half an hour). I should also note that the disk doesn't seem to be spinning smoothly; it seems to be chopping back, like it's reading the same sector over and over again. I noticed this after I kicked out the cable. Any help would be greatly appreciated. Update: It would seem the problem has taken a turn for the worse. The external hard drive now shows up on my computer as a local disk and is not mountable by Linux.

    Read the article

  • Data recovery on working hard drive

    - by emgee
    So I have a 5-bay hot-swap SATA enclosure that's connected to a Silicon Image-based SATA adapter in a computer. It's running XP Pro. There are two 1.5TB hard drives in slots 1 and 2 respectively, set up as RAID 1 using the Silicon Image utility. There are also two 1TB drives in bays 3 and 4, also set to RAID 1 the same way. The partitions for both RAID arrays are dynamic partitions. A few days back, there was a bare hard drive that needed some files copied off of it, so it was popped into bay 5, that bay was set to pass-through, and the data was copied off of it. Later, I noticed that my 1.5TB drives no longer showed up in Windows. In the Silicon Image utility, the drives showed up fine, no error. However, in Device Manager, it shows the RAID 1 array as uninitialized. It shows up as the right size, etc., but nothing else. There's no sign of anything wrong with either drive, so I'm not sure what happened exactly. I'm not the only one who has access to that computer, so it is possible something else was done to it that I don't know of. There's quite a lot of data on it still, and if at all possible, I'd prefer not to send it to Ontrack. Does anyone know of software that would restore the partitions, keeping in mind that it's a Windows LDM partition? I have access to a variety of operating systems, so something that would work on Mac, Windows or Linux would be acceptable. The programs I usually use are not compatible with LDM.

    Read the article

  • Easily Plotting Multiple Data Series in Excel

    - by John
    I really need help figuring out how to speed up graphing multiple series on a graph. I have separate devices that give monthly readings for several variables like pressure, temperature, and salinity. Each of these variables is going to be its own graph, with the devices being the series. My x-axis is going to be the dates on which these values were taken. The problem is that it takes ages to do this for each spreadsheet, since I have monthly dates from 1950 up to the present and about 50 devices in each spreadsheet. I also have graphs for calculated values that are in columns next to them. Each of these devices is going to become a data series in the graph. For example, in one of my graphs I have all the pressures from the devices, and each of the data series' names is the name of the device. I want a fast way to do this; doing it manually is taking a very long time. Please help! Is there any easier way to do this? The data is consistent and the dates all line up. I am just repeating the same clicks over and over again. Thank you!
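    Outside Excel, this kind of repetitive charting is usually scripted. Purely as an illustration of the structure (a date column plus one column per device), here is a small pandas/matplotlib sketch in which the file name, sheet name and column names are all assumptions, not the asker's actual workbook:

        import pandas as pd
        import matplotlib.pyplot as plt

        # Hypothetical layout: column "Date", then one pressure column per device.
        df = pd.read_excel("pressure.xlsx", sheet_name="Pressure", parse_dates=["Date"])

        # One line (series) per device column, dates on the x-axis.
        ax = df.plot(x="Date", y=[c for c in df.columns if c != "Date"], figsize=(10, 5))
        ax.set_ylabel("Pressure")
        ax.legend(title="Device", fontsize="small")
        plt.show()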

    Read the article

  • VS 2010 Debugger Improvements (BreakPoints, DataTips, Import/Export)

    - by ScottGu
    This is the twenty-first in a series of blog posts I’m doing on the VS 2010 and .NET 4 release. Today’s blog post covers a few of the nice usability improvements coming with the VS 2010 debugger. The VS 2010 debugger has a ton of great new capabilities. Features like Intellitrace (aka historical debugging), the new parallel/multithreaded debugging capabilities, and dump debugging support typically get a ton of (well deserved) buzz and attention when people talk about the debugging improvements with this release. I’ll be doing blog posts in the future that demonstrate how to take advantage of them as well. With today’s post, though, I thought I’d start off by covering a few small, but nice, debugger usability improvements that were also included with the VS 2010 release, and which I think you’ll find useful.

    Breakpoint Labels

    VS 2010 includes new support for better managing debugger breakpoints. One particularly useful feature is called “Breakpoint Labels” – it enables much better grouping and filtering of breakpoints within a project or across a solution. With previous releases of Visual Studio you had to manage each debugger breakpoint as a separate item. Managing each breakpoint separately can be a pain with large projects and for cases when you want to maintain “logical groups” of breakpoints that you turn on/off depending on what you are debugging. Using the new VS 2010 “breakpoint labeling” feature you can now name these “groups” of breakpoints and manage them as a unit.

    Grouping Multiple Breakpoints Together using a Label

    Below is a screen-shot of the breakpoints window within Visual Studio 2010. This lists all of the breakpoints defined within my solution (which in this case is the ASP.NET MVC 2 code base): The first and last breakpoint in the list above breaks into the debugger when a Controller instance is created or released by the ASP.NET MVC Framework. Using VS 2010, I can now select these two breakpoints, right-click, and then select the new “Edit labels…” menu command to give them a common label/name (making them easier to find and manage): Below is the dialog that appears when I select the “Edit labels” command. We can use it to create a new string label for our breakpoints or select an existing one we have already defined. In this case we’ll create a new label called “Lifetime Management” to describe what these two breakpoints cover: When we press the OK button our two selected breakpoints will be grouped under the newly created “Lifetime Management” label:

    Filtering/Sorting Breakpoints by Label

    We can use the “Search” combobox to quickly filter/sort breakpoints by label. Below we are only showing those breakpoints with the “Lifetime Management” label:

    Toggling Breakpoints On/Off by Label

    We can also toggle sets of breakpoints on/off by label group. We can simply filter by the label group, do a Ctrl-A to select all the breakpoints, and then enable/disable all of them with a single click:

    Importing/Exporting Breakpoints

    VS 2010 now supports importing/exporting breakpoints to XML files – which you can then pass off to another developer, attach to a bug report, or simply re-load later. To export only a subset of breakpoints, you can filter by a particular label and then click the “Export breakpoint” button in the Breakpoints window: Above I’ve filtered my breakpoint list to only export two particular breakpoints (specific to a bug that I’m chasing down). I can export these breakpoints to an XML file and then attach it to a bug report or email – which will enable another developer to easily set up the debugger in the correct state to investigate it on a separate machine.

    Pinned DataTips

    Visual Studio 2010 also includes some nice new “DataTip pinning” features that enable you to better see and track variable and expression values when in the debugger. Simply hover over a variable or expression within the debugger to expose its DataTip (which is a tooltip that displays its value) – and then click the new “pin” button on it to make the DataTip always visible: You can “pin” any number of DataTips you want onto the screen. In addition to pinning top-level variables, you can also drill into the sub-properties on variables and pin them as well. Below I’ve “pinned” three variables: “category”, “Request.RawUrl” and “Request.LogonUserIdentity.Name”. Note that these last two variables are sub-properties of the “Request” object.

    Associating Comments with Pinned DataTips

    Hovering over a pinned DataTip exposes some additional UI within the debugger: Clicking the comment button at the bottom of this UI expands the DataTip - and allows you to optionally add a comment with it: This makes it really easy to attach and track debugging notes:

    Pinned DataTips are usable across both Debug Sessions and Visual Studio Sessions

    Pinned DataTips can be used across multiple debugger sessions. This means that if you stop the debugger, make a code change, and then recompile and start a new debug session - any pinned DataTips will still be there, along with any comments you associate with them. Pinned DataTips can also be used across multiple Visual Studio sessions. This means that if you close your project, shut down Visual Studio, and then later open the project up again – any pinned DataTips will still be there, along with any comments you associate with them.

    See the Value from Last Debug Session (Great Code Editor Feature)

    How many times have you ever stopped the debugger only to go back to your code and say: $#@! – what was the value of that variable again??? One of the nice things about pinned DataTips is that they keep track of their “last value from debug session” – and you can look these values up within the VB/C# code editor even when the debugger is no longer running. DataTips are by default hidden when you are in the code editor and the debugger isn’t running. On the left-hand margin of the code editor, though, you’ll find a push-pin for each pinned DataTip that you’ve previously set up: Hovering your mouse over a pinned DataTip will cause it to display on the screen. Below you can see what happens when I hover over the first pin in the editor - it displays our debug session’s last values for the “Request” object DataTip along with the comment we associated with them: This makes it much easier to keep track of state and conditions as you toggle between code editing mode and debugging mode on your projects.

    Importing/Exporting Pinned DataTips

    As I mentioned earlier in this post, pinned DataTips are by default saved across Visual Studio sessions (you don’t need to do anything to enable this). VS 2010 also now supports importing/exporting pinned DataTips to XML files – which you can then pass off to other developers, attach to a bug report, or simply re-load later. Combined with the new support for importing/exporting breakpoints, this makes it much easier for multiple developers to share debugger configurations and collaborate across debug sessions.

    Summary

    Visual Studio 2010 includes a bunch of great new debugger features – both big and small. Today’s post shared some of the nice debugger usability improvements. All of the features above are supported with the Visual Studio 2010 Professional edition (the Pinned DataTip features are also supported in the free Visual Studio 2010 Express Editions). I’ll be covering some of the “big big” new debugging features like Intellitrace, parallel/multithreaded debugging, and dump file analysis in future blog posts. Hope this helps, Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Core Data Migration - "Can't add source store" error

    - by Tofrizer
    Hi, in my iPhone app I'm using Core Data and I've made changes to my data model that cannot be automatically migrated over (i.e. I added new relationships). I added a data model version (Design - Data Model - Add Model Version) and applied my new data model changes to the new version 2. I then created a mapping model and set the Source and Destination models to their correct data models (old and new respectively). When I run the app and call the persistentStoreCoordinator, my app barfs with the following:

        2010-02-27 02:40:30.922 XXXX[73578:20b] Unresolved error Error Domain=NSCocoaErrorDomain Code=134110
        UserInfo=0xfc2240 "Operation could not be completed. (Cocoa error 134110.)", {
            NSUnderlyingError = Error Domain=NSCocoaErrorDomain Code=134130 UserInfo=0xfbb3a0
                "Operation could not be completed. (Cocoa error 134130.)";
            reason = "Can't add source store";
        }

    FWIW (not much, I think) I've also made the usual code changes in persistentStoreCoordinator to use the NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption (for future data model changes that can be automatically migrated). More relevantly, my managedObjectModel is created by calling initWithContentsOfURL where the file/resource type is "momd". I've tried updating both the source and destination model in the mapping model (Design - Mapping Model - Update XXX Model) as well as deleting the mapping model and recreating it. I've cleaned and re-built, but all to no avail. I still get the above error message. Any pointers/thoughts on how I can further debug or resolve this problem, please? I haven't posted any code snippets because this feels much more like a build environment issue (and my code is very standard - just the usual Core Data code to handle migrations using a mapping model, but I'm happy to show the code if it helps). Appreciate any help. Thanks

    Read the article

  • Java errors on Lotus Domino Designer Client 8.5.1

    - by ajcooper
    I have a clean install of Lotus Notes 8.5.1 (now with FP3) and I'm getting the following errors in Designer. This is with a new database with a couple of forms and views. I'm finding this is typical across all databases. Is there something I need to install/configure etc.? I'm not new to Notes, but I'm new to 8.5. Thanks, Aidan

        Description                                                   | Resource   | Path          | Location | Type
        Cannot resolve plug-in: org.eclipse.core.runtime              | plugin.xml | TestAgent.nsf | line 9   | Plug-in Problem
        Cannot resolve plug-in: org.eclipse.ui                        | plugin.xml | TestAgent.nsf | line 8   | Plug-in Problem
        Cannot resolve plug-in: com.ibm.commons                       | plugin.xml | TestAgent.nsf | line 10  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.commons.vfs                   | plugin.xml | TestAgent.nsf | line 12  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.commons.xml                   | plugin.xml | TestAgent.nsf | line 11  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.designer.runtime              | plugin.xml | TestAgent.nsf | line 15  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.designer.runtime.directory    | plugin.xml | TestAgent.nsf | line 14  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.jscript                       | plugin.xml | TestAgent.nsf | line 13  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.notes.java.api                | plugin.xml | TestAgent.nsf | line 20  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.core                      | plugin.xml | TestAgent.nsf | line 16  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.core                      | plugin.xml | TestAgent.nsf | line 21  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.designer                  | plugin.xml | TestAgent.nsf | line 18  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.designer                  | plugin.xml | TestAgent.nsf | line 22  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.domino                    | plugin.xml | TestAgent.nsf | line 19  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.domino                    | plugin.xml | TestAgent.nsf | line 23  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.extsn                     | plugin.xml | TestAgent.nsf | line 17  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.extsn                     | plugin.xml | TestAgent.nsf | line 24  | Plug-in Problem
        Cannot resolve plug-in: com.ibm.xsp.rcp                       | plugin.xml | TestAgent.nsf | line 25  | Plug-in Problem

    Read the article

  • iPhone and Core Data: how to retain user-entered data between updates?

    - by Shaggy Frog
    Consider an iPhone application that is a catalogue of animals. The application should allow the user to add custom information for each animal -- let's say a rating (on a scale of 1 to 5), as well as some notes they can enter in about the animal. However, the user won't be able to modify the animal data itself. Assume that when the application gets updated, it should be easy for the (static) catalogue part to change, but we'd like the (dynamic) custom user information part to be retained between updates, so the user doesn't lose any of their custom information. We'd probably want to use Core Data to build this app. Let's also say that we have a previous process already in place to read in animal data to pre-populate the backing (SQLite) store that Core Data uses. We can embed this database file into the application bundle itself, since it doesn't get modified. When a user downloads an update to the application, the new version will include the latest (static) animal catalogue database, so we don't ever have to worry about it being out of date. But, now the tricky part: how do we store the (dynamic) user custom data in a sound manner? My first thought is that the (dynamic) database should be stored in the Documents directory for the app, so application updates don't clobber the existing data. Am I correct? My second thought is that since the (dynamic) user custom data database is not in the same store as the (static) animal catalogue, we can't naively make a relationship between the Rating and the Notes entities (in one database) and the Animal entity (in the other database). In this case, I would imagine one solution would be to have an "animalName" string property in the Rating/Notes entity, and match it up at runtime. Is this the best way to do it, or is there a way to "sync" two different databases in Core Data?

    Read the article

  • How to transfer a file from client to server?

    - by Phsika
    I try to receive a file from the server, but it gives me an error on server.Start(): "An attempt was made to access a socket in a way forbidden by its access permissions." How can I solve it?

        private void btn_Recieve_Click(object sender, EventArgs e)
        {
            TcpListener server = null;
            // Set the TcpListener on port 13000.
            Int32 port = 13000;
            IPAddress localAddr = IPAddress.Parse("192.168.1.201");

            // TcpListener server = new TcpListener(port);
            server = new TcpListener(localAddr, port);

            // Start listening for client requests.
            server.Start();

            // Buffer for reading data
            Byte[] bytes = new Byte[277577];
            String data;
            data = null;

            // Perform a blocking call to accept requests.
            // You could also use server.AcceptSocket() here.
            TcpClient client = server.AcceptTcpClient();
            NetworkStream stream = client.GetStream();

            int i;
            i = stream.Read(bytes, 0, 277577);

            BinaryWriter writer = new BinaryWriter(File.Open("GoodLuckToMe.jpg", FileMode.Create));
            writer.Write(bytes);
            writer.Close();
            client.Close();
        }

    Read the article

  • What are the responsibilities of the data layer?

    - by alimac83
    I'm working on a project where I had to add a data layer to my application. I've always thought that the data layer is purely responsible for CRUD functions, i.e. it shouldn't really contain any logic but should simply retrieve data for the business layer to manipulate. However, I'm a little confused with my project because I'm not sure whether I've structured my app correctly for this scenario. Basically I'm trying to retrieve a list of products from the database that fall within a certain pricing threshold. At the moment I have a function in my data layer that basically returns all products where price > min threshold and price < max threshold. But it got me thinking that maybe this is incorrect. Should the data layer simply return a list of ALL products and then the business logic do the filtering? I'm pretty confused over whether the data layer should simply provide methods that allow the business layer to get raw data, or whether it should be responsible for getting filtered data too. If anyone has an article or something explaining this in detail it'd be very helpful. Thanks
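    To make the trade-off concrete, here is a minimal, hypothetical sketch (Python with sqlite3; the table, class and method names are all invented): the data layer exposes a parameterised query, while the business layer owns the rule of which thresholds to pass. On this split, executing a WHERE clause is still retrieval; choosing the thresholds is the business logic.

        import sqlite3

        class ProductRepository:
            """Data layer: CRUD and queries only, no business rules."""
            def __init__(self, conn: sqlite3.Connection):
                self.conn = conn

            def get_products_in_price_range(self, min_price: float, max_price: float):
                # Filtering here is still "retrieval": the database is simply
                # better at it than an in-memory loop in the business layer.
                cur = self.conn.execute(
                    "SELECT id, name, price FROM products WHERE price > ? AND price < ?",
                    (min_price, max_price))
                return cur.fetchall()

        class PricingService:
            """Business layer: owns what 'within the threshold' means."""
            def __init__(self, repo: ProductRepository):
                self.repo = repo

            def products_for_tier(self, tier):
                lo, hi = {"budget": (0, 50), "premium": (50, 500)}[tier]  # business rule
                return self.repo.get_products_in_price_range(lo, hi)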

    Read the article

  • How to store and remove dynamically and automatically allocated variables of a generic data type in a custom list data structure

    - by Vineel Kumar Reddy
    Hi, I have created a list data structure implementation for a generic data type, with each node declared as follows:

        struct Node {
            void *data;
            ....
            ....
        }

    So each node in my list will have a pointer to the actual data (generic, could be anything) that should be stored in the list. I have the following signature for adding a node to the list:

        AddNode(struct List *list, void* eledata);

    The problem is that when I want to remove a node, I also want to free the data block pointed to by the *data pointer inside the node structure that is going to be freed. At first, freeing the data block seems to be straightforward:

        free(data) // forget about the syntax.....

    If data is pointing to a block created by malloc then the above call is fine, and we can free that block using free:

        int *x = (int*) malloc(sizeof(int));
        *x = 10;
        AddNode(list, (void*)x); // x can be freed as it was created using malloc

    But what if a node is created as follows?

        int x = 10;
        AddNode(list, (void*)&x); // x cannot be freed as it was not created using malloc

    Here we cannot call free on variable x!!!! How do I know about, or implement, the functionality for both dynamically allocated variables and static ones that are passed to my list? Thanks in advance...

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired - data for at least a few days. The hardware interface is a UART to USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks. What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached. Finally my question - would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
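    As a sketch of the cache-first idea described above (not the actual app code: the SQLite table layout, the one-log-per-minute timestamps and the read_from_device callback are all assumptions), the flow could look like this:

        import sqlite3

        conn = sqlite3.connect("logcache.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS logs (
                          ts INTEGER PRIMARY KEY,   -- one log per minute, epoch seconds
                          pressure REAL, temperature REAL, salinity REAL)""")

        def get_logs(start_ts, end_ts, read_from_device):
            """Return all logs in [start_ts, end_ts), pulling only missing ones from the device."""
            cached = {row[0] for row in conn.execute(
                "SELECT ts FROM logs WHERE ts >= ? AND ts < ?", (start_ts, end_ts))}
            missing = [t for t in range(start_ts, end_ts, 60) if t not in cached]
            for ts in missing:                              # slow UART path, ~30 logs/s
                # read_from_device is a hypothetical callback returning (pressure, temperature, salinity)
                conn.execute("INSERT OR IGNORE INTO logs VALUES (?, ?, ?, ?)",
                             (ts, *read_from_device(ts)))
            conn.commit()
            return conn.execute("SELECT * FROM logs WHERE ts >= ? AND ts < ? ORDER BY ts",
                                (start_ts, end_ts)).fetchall()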

    Read the article

  • Manually wiring up unobtrusive jquery validation client-side without Model/Data Annotations, MVC3

    - by cmorganmcp
    After searching and experimenting for 2 days I relent. What I'd like to do is manually wire up injected HTML with jQuery validation. I'm getting a simple string array back from the server and creating a select with the strings as options. The static fields on the form are validating fine. I've been trying the following:

        var dates = $("<select id='ShiftDate' data-val='true' data-val-required='Please select a date'>");
        dates.append("<option value=''>-Select a Date-</option>");
        for (var i = 0; i < data.length; i++) {
            dates.append("<option value='" + data[i] + "'>" + data[i] + "</option>");
        }
        $("fieldset", addShift).append($("<p>")
            .append("<label for='ShiftDate'>Shift Date</label>\r")
            .append(dates)
            .append("<span class='field-validation-valid' data-valmsg-for='ShiftDate' data-valmsg-replace='true'></span>"));

        // I tried the code below as well instead of adding the data-val attributes and span manually, with no luck
        dates.rules("add", {
            required: true,
            messages: { required: "Please select a date" }
        });

        // Thought this would do it when I came across several posts, but it didn't
        $.validator.unobtrusive.parse(dates.closest("form"));

    I know I could create a view model, decorate it with a required attribute, create a SelectList server-side and send that, but it's more of a "how would I do this" situation now. Can anyone shed light on why the above code wouldn't work as I expect? -chad

    Read the article

  • Retain numerical precision in an R data frame?

    - by David
    When I create a data frame from numeric vectors, R seems to truncate the value below the precision that I require in my analysis:

        data.frame(x=0.99999996)

    returns 1 (see update 1). I am stuck when fitting spline(x,y) and two of the x values are set to 1 due to rounding while y changes. I could hack around this but I would prefer to use a standard solution if available.

    Example: here is an example data set:

        d <- data.frame(x = c(0.668732936336141, 0.95351462456867, 0.994620622127435,
                              0.999602102672081, 0.999987126195509, 0.999999955814133,
                              0.999999999999966),
                        y = c(38.3026509783688, 11.5895099585560, 10.0443344234229,
                              9.86152339768516, 9.84461434575695, 9.81648333804257,
                              9.83306725758297))

    The following solution works, but I would prefer something that is less subjective:

        plot(d$x, d$y, ylim=c(0,50))
        lines(spline(d$x, d$y), col='grey')                      # bad fit
        lines(spline(d[-c(4:6),]$x, d[-c(4:6),]$y), col='red')   # reasonable fit

    Update 1: Since posting this question, I realize that this will return 1 even though the data frame still contains the original value, e.g.

        > dput(data.frame(x=0.99999999996))

    returns

        structure(list(x = 0.99999999996), .Names = "x", row.names = c(NA, -1L), class = "data.frame")

    Update 2: After using dput to post this example data set, and some pointers from Dirk, I can see that the problem is not in the truncation of the x values but the limits of the numerical errors in the model that I have used to calculate y. This justifies dropping a few of the equivalent data points (as in the example red line).

    Read the article

  • Data not transferred from form to MySQL table (data update is not happening)

    - by Jimson
    Hi all, and thanks in advance for your help. I have tried and been unable to find an answer. My problem is that I am unable to update the values entered in the form. I have attached all the files; I'm using a MySQL database to fetch data. I'm able to add and delete records from the form using Ajax and PHP scripts against the MySQL database, but I am not able to update data that was retrieved from the database. The file structure is as follows: index.php is a file with Ajax functions; it displays the form for adding new data to MySQL (using save.php) and a list of all records is viewed without refreshing the page (calling load-list.php to view all records from index.php works fine, and save.php saves the data from the form). Delete is an Ajax function called from index.php to delete a record from the MySQL database (the function calling delete.php works fine). Update is an Ajax function called from index.php to update data using update-form.php, by retrieving a specific record from the MySQL table (this works fine). The problem lies in updating data from update-form.php to update.php (in which the UPDATE query for MySQL is written). I have tried many things and finally figured out that the data is not being transferred from update-form.php to update.php: there is a small problem in the jQuery Ajax function, where it is not transferring data to the update.php page. Can anyone correct this? I will be grateful. Please find the link below for all the files: link to get my form files

    Read the article

  • Creation of model in core data on the fly

    - by user1740045
    How can we create a model in Core Data on the fly, i.e. getting the schema of a database from somewhere and then creating a Core Data object graph? Question: Yes, that's fine, and I agree with all the advantages. But can anybody tell me, practically, what the benefit is of integrating Core Data into a project instead of using SQL directly?
    1. No need to write SQL boilerplate code [but you need to learn the Core Data model (steep learning curve)].
    2. We can undo and redo changes [but practically, who needs it?].
    3. We can migrate to another schema [but that can be done with SQLite as well; you just need to add another field to the table].
    4. For, say, an aggregation on some field in a table, in Core Data we need to loop through Core Data objects, whereas in SQLite we need to first write SQLite boilerplate code and then the basic aggregation SQL query, which is easy to write; only the length of the code increases. But in the case of Core Data, there is a lot to learn.
    So apart from reducing the length of code, does it actually add value to the project, in terms of memory efficiency, performance, etc.? PS: If anybody has actually worked with Core Data (model creation on the fly), please share some pointers. Thanks!

    Read the article

  • MSDeploy doesn't deploy to remote server using MSBuild and Visual Studio 2010

    - by user317762
    I'm currently running Visual Studio Team System 2010 RC and I'm trying to get the Build Service set up to build my solution and deploy the 3 web applications in it. I've created a custom build configuration called Integration and I've set the "IIS Web site/application name to use on the destination server" on the Package/Publish tab of the properties for each of the web applications. In my Build Definition I've set the following arguments:

        /p:DeployOnBuild=True /p:DeployTarget=MSDeployPublish /p:MSDeployPublishMethod=InProc /p:MsDeployServiceUrl=http://my-server-name:8172/msdeploy.axd /p:EnablePackageProcessLoggingAndAssert=True

    However, when I run the build I get the following error, for all three web applications:

        Updating setAcl (RightContent).
        C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets(3481,5): error : Web deployment task failed. (Attempted to perform an unauthorized operation.)

    I don't think this is my actual problem though. This error is occurring after the "Updating setAcl" entry in the log. This is what's causing the error message, but it appears that MSDeploy is trying to deploy to the local IIS on the build server, not the server I specified with the MsDeployServiceUrl parameter. After looking at the targets file at C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.targets, I added EnablePackageProcessLoggingAndAssert, which adds extra logging. The log shows an empty string for the value of MsDeployServiceUrl. I also noticed in the target that MsDeployServiceUrl has a lowercase s, which is somewhat confusing because the task name MSDeployPublish has an uppercase S. I tried it with an uppercase S, then again with lowercase, but neither worked. A couple other things to note: my build service is running as NETWORK SERVICE, and the server I'm trying to deploy to is on another domain. I also tried adding /p:username=mydomain\myusername /p:password=mypassword to the MSBuild parameter list, but that didn't help. Does anyone know if I'm supplying the correct parameters, or can you provide me with the correct ones? Thanks

    Read the article

  • Using parameters in reports for Visual Studio 2008

    - by Jim Thomas
    This is my first attempt at creating a Visual Studio 2008 report using parameters. I have created the dataset and the report. If I run it with a hard-coded filter on a column, the report runs fine. When I change the filter to '?' I keep getting this error: "No overload for method 'Fill' takes '1' argument". Obviously I am missing some way to connect the parameter on the dataset to a report parameter. I have defined a report parameter using the Report/Report Parameter screen. But how does that report parameter get tied to the dataset table parameter? Is there a special naming convention for the parameter? I have Googled this a half dozen times and read the MSDN documentation, but the examples all seem to use a different approach (like creating a SQL query rather than a table-based dataset) or entering the parameter name as "=Parameters!name.value", but I can't figure out where to do that. One MSDN example suggested I needed to create some C# code using a SetParameters() method to make the connection. Is that how it is done? If anyone can recommend a good walk-through I'd appreciate it. Edit: After more reading, it appears I don't need report parameters at all. I am simply trying to add a parameter to the database query. So I would create a text box on the form, get the user's input, then apply that parameter programmatically to the Fill() argument list. The report parameter, on the other hand, is an ad-hoc value, generally entered by a user, that you want to appear on the report. But there is no relationship between report parameters and query/dataset parameters. Is that correct?

    Read the article
