Search Results

Search found 69877 results on 2796 pages for 'ibm data studio'.


  • Core Data vs. SQLitePersistentObjects

    - by Macatomy
    I'm creating an iPhone app and I'm trying to choose between two solutions for a persistent store: Core Data or SQLitePersistentObjects. Basically, all my app needs is a way to store an array of model objects and then load them again to display in a UITableView. It's nothing too complicated. Core Data seems to have a much higher learning curve than the simple-to-use SQLitePersistentObjects. Are there any obvious benefits to using Core Data over SQLitePersistentObjects in my case?

    Read the article

  • C# 2008 Express v C# 2010 Express

    - by Andy
    Can anybody post a link to a comparison chart, or even to a duplicate question here on SO, for these two products? There is plenty of info on what is missing between Express and Pro, for example, but I'm struggling to find much on Express vs. Express. Is the only real difference the ability to develop apps for .NET 4.0? I'm developing WinForms apps targeting .NET 2.0 at the moment, so are there any benefits for me in changing to 2010 Express? Unfortunately, upgrading to VS Professional or similar is not an option for me right now, so I'm stuck with the hamstrung versions. Thanks.
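
    For what it's worth, beyond IDE polish the most visible difference is the C# 4.0 compiler and .NET 4.0 targeting. The snippet below is purely illustrative (not from the question or any answer) and shows two C# 4.0 features that 2010 Express can compile but 2008 Express cannot:

        // Illustrative only: C# 4.0 language features available in Visual C# 2010
        // Express but not in the 2008 Express compiler.
        using System;

        class WhatsNewInCSharp4
        {
            // Optional and named parameters are new in C# 4.0.
            static void Connect(string host, int port = 80, bool secure = false)
            {
                Console.WriteLine("{0}:{1} secure={2}", host, port, secure);
            }

            static void Main()
            {
                Connect("example.com");               // both defaults apply
                Connect("example.com", secure: true); // named argument skips 'port'

                // 'dynamic' defers member lookup to run time (requires .NET 4.0).
                dynamic d = "hello";
                Console.WriteLine(d.Length);
            }
        }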

    Read the article

  • VS2010 doesn't show project's CodeAnalysis page

    - by Christoph Ungersböck
    When I try to open a project's Code Analysis page I get the error "An error occurred trying to load the page. Only TrueType fonts are supported. This is not a TrueType font." I also get a very similar exception box when I try to open the solution's property window: "Only TrueType fonts are supported. This is not a TrueType font." Has anyone else experienced this error?

    Read the article

  • How to return plain XML from ADO.NET data service

    - by KHALIL
    Hi, I was wondering how to return plain XML from ADO.NET Data Services. I have exposed an ADO.NET data service to different departments in our company who are not very technical. The data returned is an Atom feed, which is hard to read and interpret in that format; too much information is returned. People from various departments would execute different queries (HTTP requests), and I wanted the results displayed as simple XML, or at least something more user-friendly like HTML. I have tried setting the Accept header of the request to plain XML, but it still returns Atom. Thanks -- Khalil
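
    As far as I know, ADO.NET Data Services only serves Atom or JSON, so one workaround, not taken from the original thread, is a small server-side shim that consumes the Atom response and re-emits a stripped-down XML document. The sketch below assumes a hypothetical service URL and output element names, and uses the standard Atom and ADO.NET Data Services XML namespaces:

        // A minimal sketch: fetch the Atom response from a data service and keep only
        // the property values under <m:properties>, one <row> element per entry.
        // The service URL and the "results"/"row" element names are hypothetical.
        using System;
        using System.Linq;
        using System.Xml.Linq;

        class AtomToPlainXml
        {
            static readonly XNamespace Atom = "http://www.w3.org/2005/Atom";
            static readonly XNamespace M =
                "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

            static void Main()
            {
                // XDocument.Load performs the HTTP GET and parses the Atom payload.
                XDocument feed = XDocument.Load("http://server/MyService.svc/Customers");

                var plain = new XElement("results",
                    from entry in feed.Descendants(Atom + "entry")
                    select new XElement("row",
                        from prop in entry.Descendants(M + "properties").Elements()
                        select new XElement(prop.Name.LocalName, (string)prop)));

                Console.WriteLine(plain);
            }
        }

    The same projection could just as easily emit an HTML table instead of XML, which may be friendlier for the non-technical departments.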

    Read the article

  • Learning Visual C++ 2008 and C++ at the same time? Any resources to recommend?

    - by Javed Ahamed
    Hey guys, I am trying to learn Visual C++ 2008 and C++ at the same time to get involved with SourceMod, a server-side modding tool for Valve games. However, I have never touched Visual C++ or C++ in general, and after some preliminary research I am quite confused by the different flavors of C++ involved (MFC, C++/CLI, Win32) and why a lot of people seem to hate Visual C++ and use something like Borland instead. I really learn visually, and have used videos from places like Lynda.com with great success. I was wondering if anyone had any exceptional resources to teach Visual C++ 2008, with its intricacies and setting up the IDE, along with C++ at the same time. Books would be nice, but videos would be preferred, and I don't mind paying for resources. Thanks in advance!

    Read the article

  • Build an Atom feed reader for an ADO.NET Data Services feed

    - by khalil
    Hi, I have built an ADO.NET data service to expose data in a SQL Server database as XML. What I want to be able to do is create a feed reader for this Atom feed in .NET, or maybe a user control that subscribes to this URI-based Atom feed from the ADO.NET data service and publishes the latest information on our website.
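
    One possible starting point (a sketch, not from the original post): since an ADO.NET data service exposes each entity set as a standard Atom feed, System.ServiceModel.Syndication can read it directly. The service URI below is hypothetical, and the sketch assumes .NET 3.5 or later with a reference to the assembly containing SyndicationFeed (System.ServiceModel.Web on 3.5):

        // A minimal feed-reader sketch: load the Atom feed and list the newest
        // entries; a web user control could bind this query to a Repeater instead.
        using System;
        using System.Linq;
        using System.ServiceModel.Syndication;
        using System.Xml;

        class FeedReader
        {
            static void Main()
            {
                using (XmlReader reader =
                       XmlReader.Create("http://server/MyService.svc/Orders"))
                {
                    SyndicationFeed feed = SyndicationFeed.Load(reader);

                    // Show the five most recently updated entries.
                    foreach (SyndicationItem item in
                             feed.Items.OrderByDescending(i => i.LastUpdatedTime).Take(5))
                    {
                        Console.WriteLine("{0} ({1:u})", item.Title.Text, item.LastUpdatedTime);
                    }
                }
            }
        }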

    Read the article

  • JVM version for WebSphere 6.1.0.23 on Solaris

    - by dr jerry
    Hi, I'm at a big financial institution and we have an application running on WebSphere 6.1 on Solaris. Due to MQ connectivity we had to install fix pack 6.1.0.23. Unfortunately this broke an EJB (1.1) which is still there as legacy (testing missed it).

    [3/23/10 11:33:18:703 CET] 00000055 EJBContainerI E WSVR0068E: Attempt to start EnterpriseBean EventRisk_1.0.0#EventRiskEJB.jar#PolicyDataManager failed with exception: java.lang.NoSuchMethodError: com.ibm.ejs.csi.ResRefListImpl.(Lorg/eclipse/jst/j2ee/ejb/EnterpriseBean;Lcom/ibm/ejs/models/base/bindings/ejbbnd/EnterpriseBeanBinding;Lcom/ibm/ejs/models/base/extensions/ejbext/EnterpriseBeanExtension;)V
        at com.ibm.ws.metadata.ejb.EJBMDOrchestrator.finishBMDInit(EJBMDOrchestrator.java:1364)
        at com.ibm.ws.runtime.component.EJBContainerImpl.finishDeferredBeanMetaData(EJBContainerImpl.java:4829)
        at com.ibm.ws.runtime.component.EJBContainerImpl$3.run(EJBContainerImpl.java:4631)
        at java.security.AccessController.doPrivileged(Native Method)
        at com.ibm.ws.security.util.AccessController.doPrivileged(AccessController.java:125)
        at com.ibm.ws.runtime.component.EJBContainerImpl.initializeDeferredEJB(EJBContainerImpl.java:4627)
        at com.ibm.ejs.container.HomeOfHomes.getHome(HomeOfHomes.java:390)
        at com.ibm.ejs.container.HomeOfHomes.internalCreateWrapper(HomeOfHomes.java:938)
        at com.ibm.ejs.container.EJSContainer.createWrapper(EJSContainer.java:4783)
        at com.ibm.ejs.container.WrapperManager.faultOnKey(WrapperManager.java:545)
        at com.ibm.ejs.util.cache.Cache.findAndFault(Cache.java:498)
        at com.ibm.ejs.container.WrapperManager.keyToObject(WrapperManager.java:489)

    We cannot reproduce the issue on our desktop boxes (it all works fine there), and we do not have direct access to the Solaris machines (we depend on the deployment department). We suspect a discrepancy in the JVM, but we're not sure. My question is twofold:

    1. Can you confirm IBM's statement that fix pack 6.1.0.23 for Solaris indeed runs on JVM 1.5.0_17 b04? Our installation tells us:

        ./java -version
        java version "1.5.0_13"

    But the deployment department is not eager to investigate.

    2. Do you see some other solution, apart from hiring Big Blue's con$ultancy?

    Kind regards, Jeroen.

    Read the article

  • Vertical mouse scrolling wheel not working in VS 2010 Ultimate

    - by Robert
    The title says it all. I tried it with two different mice, both of which work perfectly fine in all other applications. The mouse is an MS IntelliMouse Optical. I even tried to speed up the vertical scroll through the mouse utility, and still nothing. It barely moves the code a tiny bit and then it stops. I had no problems at all with VS 2008, which is installed concurrently on the same machine. Am I the only one having this problem?

    Read the article

  • Visual Web Developer resizes "all images" together?

    - by ahmed
    I have a weird problem with images in Visual Web Developer. I can't change my images' properties from the Properties panel (my changes don't take effect), and when I resize one image (by dragging its border), all images in my website get that new size. Any idea?

    Read the article

  • Parse and transform XML with missing elements into table structure

    - by dnlbrky
    I'm trying to parse an XML file. A simplified version of it looks like this:

    x <- '<grandparent><parent><child1>ABC123</child1><child2>1381956044</child2></parent><parent><child2>1397527137</child2></parent><parent><child3>4675</child3></parent><parent><child1>DEF456</child1><child3>3735</child3></parent><parent><child1/><child3>3735</child3></parent></grandparent>'
    library(XML)
    xmlRoot(xmlTreeParse(x))
    ## <grandparent>
    ##  <parent>
    ##   <child1>ABC123</child1>
    ##   <child2>1381956044</child2>
    ##  </parent>
    ##  <parent>
    ##   <child2>1397527137</child2>
    ##  </parent>
    ##  <parent>
    ##   <child3>4675</child3>
    ##  </parent>
    ##  <parent>
    ##   <child1>DEF456</child1>
    ##   <child3>3735</child3>
    ##  </parent>
    ##  <parent>
    ##   <child1/>
    ##   <child3>3735</child3>
    ##  </parent>
    ## </grandparent>

    I'd like to transform the XML into a data.frame / data.table that looks like this:

    parent <- data.frame(child1=c("ABC123",NA,NA,"DEF456",NA),
                         child2=c(1381956044, 1397527137, rep(NA, 3)),
                         child3=c(rep(NA, 2), 4675, 3735, 3735))
    parent
    ##   child1     child2 child3
    ## 1 ABC123 1381956044     NA
    ## 2   <NA> 1397527137     NA
    ## 3   <NA>         NA   4675
    ## 4 DEF456         NA   3735
    ## 5   <NA>         NA   3735

    If each parent node always contained all of the possible elements ("child1", "child2", "child3", etc.), I could use xmlToList and unlist to flatten it, and then dcast to put it into a table. But the XML often has missing child elements. Here is an attempt with incorrect output:

    library(data.table)
    ## Flatten:
    dt <- as.data.table(unlist(xmlToList(x)), keep.rownames=T)
    setnames(dt, c("column", "value"))
    ## Add row numbers, but they're incorrect due to missing XML elements:
    dt[, row:=.SD[,.I], by=column][]
              column      value row
    1: parent.child1     ABC123   1
    2: parent.child2 1381956044   1
    3: parent.child2 1397527137   2
    4: parent.child3       4675   1
    5: parent.child1     DEF456   2
    6: parent.child3       3735   2
    7: parent.child3       3735   3
    ## Reshape from long to wide, but some values are in the wrong row:
    dcast.data.table(dt, row~column, value.var="value", fill=NA)
    ##    row parent.child1 parent.child2 parent.child3
    ## 1:   1        ABC123    1381956044          4675
    ## 2:   2        DEF456    1397527137          3735
    ## 3:   3            NA            NA          3735

    I won't know ahead of time the names of the child elements, or the count of unique element names for children of the grandparent, so the answer should be flexible.

    Read the article

  • Can Core Data be used on Linux?

    - by glenc
    This might be a stupid question, but I was wondering whether or not you can use the Core Data libraries on Linux at all? I'm planning how to build the server side of an iPhone app that I'm working on, and have found that you can use PyObjC to get access to Core Data in a Python environment, e.g. use Core Data in a TurboGears web application. At this point I'm thinking that you would have to run the web server on Mac OS X, because I can't find any evidence on the internet that you can access the Objective-C libraries on Linux. I've always written webapps on Linux, but I will obviously make the jump to an OS X server if it allows me to use the same datastore implementation on the iPhone and the server, the only job remaining being the Core Data to Web Services XML translation that has to happen on the wire.

    Read the article

  • Windows CE 5.0 image building: Possible without Platform Builder?

    - by developer
    Is it possible to create Windows CE 5.0 images (ie: nk.bin) from VS2005/VS2008 without using Platform Builder? If so, how? Can a vendor BSP for WinCE 5 be loaded into VS2005/2008? Are there the parts to do this available for download from Microsoft (ie: the SDK), or must you buy the special bits (a la PB) from a "special distributor"? I know it is possible to build binaries (.dll, .exe) for WinCE 5.0 using VS, my question is about creating entire bootable CE 5.0 images for embedded platforms.

    Read the article

  • Deploying an ASP.NET MVC 2 / C# 4.0 application on IIS 6

    - by Mose
    Hi, I have a problem migrating from VS 2008 / MVC 1 to VS 2010 (C# 4.0) / MVC 2. The web.config has been updated and the site runs well in Cassini, but my problem now is deploying on IIS 6. I updated the web site to run under ASP.NET 4, but whatever URL I try, I always get a 404 error. It's as if the routing were not taken into account (yes, the wildcard mapping has been done). I do not understand this mess and could not google anything interesting... Thanks for your suggestions! O.
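
    Not from the original thread, but two things commonly behind this on IIS 6: the ASP.NET 4.0 ISAPI must be registered (aspnet_regiis -i) and set to Allowed under Web Service Extensions, otherwise every request 404s; and if wildcard mapping still misbehaves, an extension-based route that IIS already hands to ASP.NET works as a fallback. A sketch of that kind of route registration (route names and defaults are illustrative):

        // Hedged sketch of an IIS 6 fallback: add a ".mvc"-style route (with the
        // extension mapped to aspnet_isapi.dll in IIS) alongside the normal route.
        using System.Web.Mvc;
        using System.Web.Routing;

        public static class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                // Extension-based route for IIS 6, e.g. /Home.mvc/Index/5
                routes.MapRoute(
                    "DefaultWithExtension",
                    "{controller}.mvc/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional });

                // Extensionless route, used once wildcard mapping works.
                routes.MapRoute(
                    "Default",
                    "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional });
            }
        }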

    Read the article

  • Data Collection (Offline - no internet) and then syncing it to generate reports from server

    - by Nishant
    So, I have a new project I am planning on taking, and I need to know what skills will be required to achieve it. The project is to do intensive data collection in the field, where there is no internet access. As part of the data collection, images will be uploaded which will have to be resized, etc. Once the data collection occurs, this data needs to be consolidated and reported on. I am thinking there are two ways of generating the report:

    1. A PDF that can be designed.
    2. Some kind of executable file (since the PDF will be huge due to the multiple images, etc.) that is navigation-friendly, with drop-downs and so on. It might not be an executable file, but could be a web page or some other way this can be delivered to the client in a friendly, professional way.

    The PDF will have to be generated somehow so that it can be printed as a hard copy. What languages and skill sets will I need to accomplish this project?

    Read the article

  • Selecting element by data attribute

    - by zizo
    Is there an easy and straightforward method to select elements based on their data attributes? For example, select all anchors that have a data attribute named customerID with a value of 22. I am kind of hesitant to use rel or other attributes to store such information, but I find it much harder to select an element based on what data is stored in it. Thanks!

    Read the article

  • Problem with WiX Votive 3.0 preprocessor

    - by Leith Bade
    I have just started using WiX for the first time. I added a WiX Votive project to my existing C project. To automatically select the correct source folder for the binaries I used the following:

    <Directory Id="INSTALLLOCATION" Name="Trapeze Capture For Objective" FileSource="$(var.CaptureForObjective.TargetDir)">

    That results in the following error:

    C:\code\CaptureForObjective\Installer\Product.wxs(10,0): error CNDL0150: Undefined preprocessor variable '$(var.CaptureForObjective.TargetDir)'.

    The C project is called CaptureForObjective, and the WiX project is called Installer. What do I need to do to get this to work?

    Read the article

  • Producing 64-bit builds on Windows with free software

    - by pauldoo
    Hi, I have a C++ project that I've been developing in Microsoft Visual C++ 2008 Express Edition. It has come to the point that I'd like to port it to 64-bit and continue development. What is the best way to do this using free software? My thoughts so far:

    The Express Edition of MSVC doesn't come with 64-bit compilers, so I can install the Windows SDK to get these. I could then port my project files to nmake, and use the IDE just as a tool to debug and invoke my nmake scripts. The downside to this is that nmake looks very poor: the example towards the end of this tutorial suggests that nmake cannot figure out source file dependencies itself, and I don't know of anything equivalent to gcc -M that I could use.

    Another option might be to use vcbuild from the Windows SDK to produce 64-bit builds from my existing vcproj files. Preliminary investigation shows that this doesn't really work, as my project files don't have the 64-bit configurations present. (Perhaps I could fudge this by adding the 64-bit configurations to the vcproj files in a text editor.)

    A final option might be to give up on MSVC and port my project to the MinGW/MSYS toolchain.

    Read the article

  • Force File Reload Before Build

    - by Byron Ross
    We have a tool that generates some code (.cs) files that are used to build the project. The tool is run during the pre-build step, but the files are updated in the solution only after the build, which means the build needs to be performed twice to clear the errors after a change to the input. Example:

    1. Modify tool input file
    2. Run build
    3. Tool runs and changes source file
    4. Build fails
    5. Run build
    6. Tool runs and changes source file (but it doesn't actually change, because the input remains the same)
    7. Build succeeds

    Any ideas how we can do away with the double build, and still let our tool be run from VS? Thanks guys!

    Read the article

  • Shipping Java code with data baked into the .jar

    - by Andrew
    I need to ship some Java code that has an associated set of data. It's a simulator for a device, and I want to be able to include all of the data used for the simulated records in the one .jar file. In this case, each simulated record contains four fields (calling party, called party, start of call, call duration). What's the best way to do that? I've gone down the path of generating the data as Java statements, but IntelliJ doesn't seem particularly happy dealing with a 100,000-line Java source file! Is there a smarter way to do this? In the C#/.NET world I'd create the data as a separate file, embed it in the assembly as a resource, and then use reflection to pull it out at runtime and access it. I'm unsure of what the appropriate analogy is in the Java world. FWIW: Java 1.6, shipping for Solaris.

    Read the article

  • Data Mining - Predictive Analysis

    - by IMHO
    We are looking at acquiring data mining software, primarily to run predictive analysis processes. How does the SQL Server Data Mining solution compare to other solutions like SPSS from IBM? Since SQL Server Data Mining is included in the SQL Server Enterprise license, what would be the justification for spending an extra couple of hundred thousand dollars on separate software just to do data mining?

    Read the article

  • R: convert data.frame columns from factors to characters

    - by Mike Dewar
    Hi, I have a data frame. Let's call him bob:

    > head(bob)
              phenotype exclusion
    GSM399350 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-
    GSM399351 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-
    GSM399352 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-
    GSM399353 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-
    GSM399354 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-
    GSM399355 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119-

    I'd like to concatenate the rows of this data frame (this will be another question). But look:

    > class(bob$phenotype)
    [1] "factor"

    Bob's columns are factors. So, for example:

    > as.character(head(bob))
    [1] "c(3, 3, 3, 6, 6, 6)"       "c(3, 3, 3, 3, 3, 3)"
    [3] "c(29, 29, 29, 30, 30, 30)"

    I don't begin to understand this, but I guess these are indices into the levels of the factors of the columns (of the court of king caractacus) of bob? Not what I need. Strangely I can go through the columns of bob by hand, and do

    bob$phenotype <- as.character(bob$phenotype)

    which works fine. And, after some typing, I can get a data.frame whose columns are characters rather than factors. So my question is: how can I do this automatically? How do I convert a data.frame with factor columns into a data.frame with character columns without having to manually go through each column?

    Bonus question: why does the manual approach work?

    Read the article
