
  • WCF binding programmatically and adding metadata exchange

    - by totem
    Hi, my problem is I'm trying to enable MEX on a service that uses a net.tcp binding. That binding is for localhost port 5000. When I want to enable MEX on the same port and have it available over HTTP, I have to enable HttpGetEnabled on the service host. All this works well, but when I try to add the binding it fails because the binding is "net.tcp://localhost:5000/test". Is there a way to enable MEX on the same port but with a different URI, without enabling NetTcpPortSharing? I don't think the code is the issue, since I can add the MEX endpoint on a different port through code and it works fine. The question is how to have net.tcp://localhost:5000/test as the WCF TCP-based endpoint and net.tcp://localhost:5000/test/mex as the MEX endpoint that serves the WSDL for the TCP endpoint. Thanks, Totem
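For what it's worth, a minimal sketch of the pattern usually used for a TCP MEX endpoint that shares the service's port; this is an assumption about the setup, and ITestService/TestService are hypothetical placeholders for the actual contract and implementation:

// Hedged sketch: one ServiceHost exposing the application endpoint and a
// net.tcp MEX endpoint on the same port under a different URI.
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(TestService));

        // Application endpoint at net.tcp://localhost:5000/test
        host.AddServiceEndpoint(typeof(ITestService),
            new NetTcpBinding(),
            "net.tcp://localhost:5000/test");

        // Metadata behavior must be added before opening the host.
        host.Description.Behaviors.Add(new ServiceMetadataBehavior());

        // MEX endpoint sharing port 5000 under /test/mex.
        host.AddServiceEndpoint(typeof(IMetadataExchange),
            MetadataExchangeBindings.CreateMexTcpBinding(),
            "net.tcp://localhost:5000/test/mex");

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}

Within a single ServiceHost, multiple net.tcp endpoints can share one port; the NetTcpPortSharing service is only needed to share a port across processes.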

  • Android XMLRPC Fault Code

    - by sameersegal
    Hey, we have been using XMLRPC for Android and it was working well until we got our hands dirty with Base64 encoding for byte[] (images); we did base64_string.replace("/","$$") for transmission. We have tried undoing the changes, and it looks like an XMLRPC error. We are getting the following error in the DDMS:

06-10 23:27:02.970: DEBUG/Test(343): org.xmlrpc.android.XMLRPCFault: XMLRPC Fault: [code 0]
06-10 23:27:02.970: DEBUG/Test(343): at org.xmlrpc.android.XMLRPCClient.callEx(XMLRPCClient.java:308)
06-10 23:27:02.970: DEBUG/Test(343): at org.xmlrpc.android.XMLRPCMethod.run(XMLRPCMethod.java:33)

Just before this I checked the body (the XML message, which is perfect) and the response received:

06-10 23:27:02.940: INFO/System.out(343): Response received: org.apache.http.message.BasicHttpResponse@437762f8

Since the message is not even reaching our cloud, the issue is with XMLRPC for Android. Any help will be most appreciated. Thanks, Sameer

  • How to use WSDL2Java generated files?

    - by vikasde
    I generated the .java files using wsdl2java found in axis2-1.5. It generated the files in this folder structure: src/net/mycompany/www/services/. The files in the services folder are SessionIntegrationStub and SessionIntegrationCallbackHandler. I would like to consume the web service now. I added the net folder to the CLASSPATH environment variable. My java file now imports the web service using:

import net.mycompany.www.services;

public class test {
    public static void main(String[] args) {
        SessionIntegrationStub stub = new SessionIntegrationStub();
        System.out.println(stub.getSessionIntegration("test"));
    }
}

Now when I try to compile this using javac test.java I get: package net.mycompany.www does not exist. Any idea?

  • Why is WPFToolkit DataGrid so slow when binding?

    - by Schneider
    I have a very simple test application where I have two objects, each with a small collection of items. When I select an object I display its collection in a WPFToolkit DataGrid. The problem is there is a noticeable delay, such that if you press the up/down keys to toggle the selection between objects you can see it can't keep up. Why is the performance so bad?

<Window x:Class="SlowGridBinding.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:Controls="clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit"
        Title="MainWindow" Height="350" Width="525">
    <StackPanel>
        <ListBox ItemsSource="{Binding Shops}" DisplayMemberPath="Name" IsSynchronizedWithCurrentItem="True"/>
        <Controls:DataGrid ItemsSource="{Binding Shops/Vegetables}" AutoGenerateColumns="True"/>
    </StackPanel>
</Window>

The DataContext is populated with some test classes filled with 50 items of random test data.

  • Simple way of converting server-side objects into client-side objects using JSON serialization for ASP.NET websites

    - by anil.kasalanati
    Introduction: With the growth of Web 2.0 and the need for a faster user experience, the spotlight has shifted onto JavaScript-based applications built using the REST pattern or the ASP.NET AJAX PageRequestManager. And when we are working with JavaScript, wouldn't it be much better if we could create objects in an OOAD way and easily push them to the client side? Following are the reasons why you would push server-side objects onto the client side:
- Easy availability of the complex object.
- Use the C# compiler and rich IntelliSense to create and maintain the objects, but use them in the JavaScript. You could run code analysis etc.
- Reduce the number of calls we make to the server side by loading data during page load.
I would like to explain the 3rd point, because it proved to be highly beneficial when I was fixing the performance issues of a major website. There can be a scenario wherein you are making multiple AJAX-based WebRequestManager calls in order to get the same response in a single page. This happens in a widget-based framework when all the widgets are independent but need some common information, available in the framework, to load their data. So instead of making n multiple calls we could load the data needed during page load. The above picture shows the scenario wherein all the widgets need the common information and then call the GetData web service on the server side. Of course the result can be cached on the client side, but a better solution would be to avoid the call completely. In order to do that, we need to JSON-serialize the content and send it in the DOM.
Example: I have developed a simple application to demonstrate the idea, and I will explain it in detail here. The class called SimpleClass is sent as serialized JSON to the client side, and it inherits from a base class which has the implementation of the GetJSONString method. You can create a single base class, and all the objects which need to be pushed to the client side can inherit from it. The important thing to note is that the class should be annotated with the DataContract attribute and the members should have the DataMember attribute. This is needed by the .NET DataContractSerializer, which follows the opt-in mode: if you want to send a member to the client side, you need to annotate it with the DataMember attribute. So if I didn't want to send the Result, I would simply remove its DataMember attribute. This is default WCF/.NET 3.5 stuff, but it provides the flexibility of having a full-fledged object on the server side while sending a smaller object to the client side. Sometimes you may hide some values due to security constraints. One more thing you will notice is that I have marked the class as Serializable so that it can be stored in the session and used in web-farm deployment scenarios.
Following is the implementation of the base class; it uses the default DataContractJsonSerializer (a hedged sketch of what such a base class can look like appears at the end of this post). For more information or customization, refer to the following blogs:
http://softcero.blogspot.com/2010/03/optimizing-net-json-serializing-and-ii.html
http://weblogs.asp.net/gunnarpeipman/archive/2010/12/28/asp-net-serializing-and-deserializing-json-objects.aspx
The next part is pretty simple: I just need to inject this object into the aspx page. In the aspx markup I have the following lines:

<script type="text/javascript">
var data = (<%= SimpleClassJSON %>);
alert(data.ResultText);
</script>

This will output the content as JSON into the variable data, which can be assigned to any element in the DOM. You can verify the element by checking data in the Firebug console.
Design considerations:
- If you have a lot of JavaScript then you need to think about using Script#, which lets you write your JavaScript in C#. Refer to Nikhil's blog: http://projects.nikhilk.net/ScriptSharp
- Take security into consideration while exposing server-side objects on the client side. I have seen applications exposing passwords and secret keys; that is not a good practice.
The application can be tested using the following URL: http://techconsulting.vpscustomer.com/Samples/JsonTest.aspx
The source code is available at http://techconsulting.vpscustomer.com/Source/HistoryTest.zip
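Since the base-class listing itself appeared as an image in the original post, here is a hedged sketch of what such a GetJSONString base class might look like, assuming DataContractJsonSerializer; BaseJsonObject and the member names mirror the article's idea but are illustrative, not the author's exact code:

// Hedged sketch of a JSON-serializing base class, assuming DataContractJsonSerializer.
// BaseJsonObject is a hypothetical name; the original post's class names may differ.
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Runtime.Serialization.Json;
using System.Text;

[Serializable]
[DataContract]
public abstract class BaseJsonObject
{
    // Serializes the runtime type of the derived object to a JSON string.
    public string GetJSONString()
    {
        var serializer = new DataContractJsonSerializer(GetType());
        using (var stream = new MemoryStream())
        {
            serializer.WriteObject(stream, this);
            return Encoding.UTF8.GetString(stream.ToArray());
        }
    }
}

[Serializable]
[DataContract]
public class SimpleClass : BaseJsonObject
{
    [DataMember]
    public string ResultText { get; set; }

    // No [DataMember] attribute: opt-in serialization skips this member.
    public string Result { get; set; }
}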

  • Why doesn't Apache2::SubProcess spawn my subprocess?

    - by codeholic
    The following script works without errors, but /tmp/test.touch is not being created (even when checked later from the command line). It seems to me that $r->spawn_proc_prog doesn't spawn a process. What could be causing the problem?

#!/usr/bin/perl
use strict;
use warnings;
use Apache2::RequestUtil;
use Apache2::SubProcess ();

my $r = Apache2::RequestUtil->request;
print "Content-Type: text/plain\n\n";
print eval { $r->spawn_proc_prog('/usr/bin/touch', ['/tmp/test.touch']) }
    ? `ls -l /tmp/test.touch`
    : $@;

  • Simple Excel Export with EPPlus

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/10/30/simple-excel-export-with-epplus.aspxAnyone I’ve ever met who works with an application that sits in front of a lot of data loves it when they can get that data exported to an Excel file for them to mess around with offline. As both developer and end user of a little website project that I’ve been working on, I found myself wanting to be able to get a bunch of the data that the application was collecting into an Excel file. The great thing about being both an end user and a developer on a project is that you can build the features that you really want! While putting this feature together I came across the fantastic EPPlus library. This library is certainly very well known and popular, but I was so impressed with it that I thought it was worth a quick blog post. This library is extremely powerful; it lets you create and manipulate Excel 2007/2010 spreadsheets in .NET code with a high degree of flexibility. My only gripe with the project is that they are not touting how insanely easy it is to build a basic Excel workbook from a simple data source. If I were running this project the approach I’m about to demonstrate in this post would be front and center on the landing page for the project because it shows how easy it really is to get started and serves as a good way to ease yourself in to some of the more advanced features. The website in question uses RavenDB, which means that we’re dealing with POCOs to model the data throughout all layers of the application. I love working like this so when it came time to figure out how to export some of this data to an Excel spreadsheet I wanted to find a way to take an IEnumerable<T> and just have it dumped to Excel with each item in the collection being modeled as a single row in the Excel worksheet. Consider the following class: public class Employee { public int Id { get; set; } public string Name { get; set; } public decimal HourlyRate { get; set; } public DateTime HireDate { get; set; } } Now let’s say we have a collection of these represented as an IEnumerable<Employee> and we want to be able to output it to an Excel file for offline querying/manipulation. As it turns out, this is dead simple to do with EPPlus. Have a look: public void ExportToExcel(IEnumerable<Employee> employees, FileInfo targetFile) { using (var excelFile = new ExcelPackage(targetFile)) { var worksheet = excelFile.Workbook.Worksheets.Add("Sheet1"); worksheet.Cells["A1"].LoadFromCollection(Collection: employees, PrintHeaders: true); excelFile.Save(); } } That’s it. Let’s break down what’s going on here: Create a ExcelPackage to model the workbook (Excel file). Note that the ‘targetFile’ value here is a FileInfo object representing the location on disk where I want the file to be saved. Create a worksheet within the workbook. Get a reference to the top-leftmost cell (addressed as A1) and invoke the ‘LoadFromCollection’ method, passing it our collection of Employee objects. Behind the scenes this is reflecting over the properties of the type provided and pulling out any public members to become columns in the resulting Excel output. The ‘PrintHeaders’ parameter tells EPPlus to grab the name of the property and put it in the first row. Save the Excel file All of the heavy lifting here is being done by the ‘LoadFromCollection’ method, and that’s a good thing. Now, this was really easy to do, but it has some limitations. Using this approach you get a very plain, un-styled Excel worksheet. 
The column widths are all set to the default. The number format for all cells is ‘General’ (which proves particularly interesting if you have a DateTime property in your data source). I’m a “no frills” guy, so I wasn’t bothered at all by trading off simplicity for style and formatting. That said, EPPlus has tons of samples that you can download that illustrate how to apply styles and formatting to cells and a ton of other advanced features that are way beyond the scope of this post.
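As a small example of closing that gap, here is a hedged sketch (not from the original post) that applies a date format to the HireDate column and auto-fits the column widths after the LoadFromCollection call; the "yyyy-mm-dd" format string and the assumption that HireDate lands in column D are mine:

// Hedged sketch: same export as above, plus a number format for the date column
// and auto-fitted column widths. Column "D" assumes HireDate is the 4th property.
public void ExportToExcelFormatted(IEnumerable<Employee> employees, FileInfo targetFile)
{
    using (var excelFile = new ExcelPackage(targetFile))
    {
        var worksheet = excelFile.Workbook.Worksheets.Add("Sheet1");
        worksheet.Cells["A1"].LoadFromCollection(Collection: employees, PrintHeaders: true);

        // Give the DateTime column a real date format instead of 'General'.
        worksheet.Cells["D:D"].Style.Numberformat.Format = "yyyy-mm-dd";
        worksheet.Cells[worksheet.Dimension.Address].AutoFitColumns();

        excelFile.Save();
    }
}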

  • Programming pattern to flatten deeply nested ajax callbacks?

    - by chiborg
    I've inherited JavaScript code where the success callback of an Ajax handler initiates another Ajax call, whose success callback may or may not initiate yet another Ajax call. This leads to deeply nested anonymous functions. Maybe there is a clever programming pattern that avoids the deep nesting and is more DRY.

jQuery.extend(Application.Model.prototype, {
    process: function() {
        jQuery.ajax({
            url: myurl1,
            dataType: 'json',
            success: function(data) {
                // process data, then send it back
                jQuery.ajax({
                    url: myurl2,
                    dataType: 'json',
                    success: function(data) {
                        if (!data.ok) {
                            jQuery.ajax({
                                url: myurl2,
                                dataType: 'json',
                                success: mycallback
                            });
                        } else {
                            mycallback(data);
                        }
                    }
                });
            }
        });
    }
});

  • Programmatically find TFS changes since last good build

    - by abigblackman
    I have several branches in TFS (dev, test, stage), and when I merge changes into the test branch I want the automated build-and-deploy script to find all the updated SQL files and deploy them to the test database. I thought I could do this by finding all the changesets associated with the build since the last good build, finding all the SQL files in those changesets, and deploying them. However, the changesets don't seem to be associated with the build for some reason, so my question is twofold: 1) How do I ensure that a changeset is associated with a particular build? 2) How can I get a list of files that have changed in the branch since the last good build? I have the last successfully built build, but I'm unsure how to get the files without checking the changesets (which, as mentioned above, are not associated with the build!)

  • Deployment of SQL compact Edition (SDF files) using Setup project

    - by Emad
    Hi, I have a C#.NET desktop application using SQL Server Compact Edition as its data store. The application should be usable by any user on the machine, and all users should see the same data (data should not differ per user). I am wondering where I should deploy the SDF file. The user's personal data folder (My Documents) means each user will have a separate database. Deploying to the same folder as the application causes Vista to copy the file to \Users\AppData\Local\VirtualStore\, and it seems to make different copies for each user. Where is it best to deploy the SDF file to ensure all users are looking at the same data?
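For reference, a hedged sketch of one common approach: point the connection string at the machine-wide CommonApplicationData folder (C:\ProgramData on Vista), which is shared by all users rather than virtualized per user; the "MyApp" and "data.sdf" names here are illustrative:

// Hedged sketch: build a SQL Compact connection string against a per-machine folder
// so every user opens the same database file. Folder/file names are illustrative.
using System;
using System.IO;

static class DatabaseLocation
{
    public static string GetConnectionString()
    {
        string folder = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
            "MyApp");
        Directory.CreateDirectory(folder); // no-op if the folder already exists

        return "Data Source=" + Path.Combine(folder, "data.sdf");
    }
}

Note that standard users typically only get read access under that folder, so an installer usually has to grant write permission on it.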

  • SQL SERVER – Powershell – Importing CSV File Into Database – Video

    - by pinaldave
    Laerte Junior is my very dear friend and a PowerShell expert. On my request he has agreed to share his PowerShell knowledge with us. Laerte Junior is a SQL Server MVP and, through his technology blog and Simple-Talk articles, an active member of the Microsoft community in Brazil. He is a skilled Principal Database Architect, Developer, and Administrator, specializing in SQL Server and PowerShell programming, with over 8 years of hands-on experience. He holds a degree in Computer Science, has been awarded a number of certifications (including MCDBA), and is an expert in SQL Server 2000 / SQL Server 2005 / SQL Server 2008 technologies. Let us read the blog post in his own words. I was reading an excellent post from my great friend Pinal about loading data from CSV files, SQL SERVER – Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video, into SQL Server, and was honored to write another guest post on SQL Authority about the magic of PowerShell. The biggest thing at TechEd NA this year was PowerShell. Fellows, if you still don't know about it, it is better to hurry. Remember that Core Server installations are the future for SQL Server, and consequently so is the shell. You don't want to be left out of this, right? Let's see some PowerShell magic now. To start our tour, first we need to download two functions from PowerShell and SQL Server Master Jedi Chad Miller: Out-DataTable and Write-DataTable. Save them in a module and add it to your profile. In my case, the module is called functions.psm1. To have some data to play with, I created 10 CSV files with the same content. I just put the SQL Server error log into a CSV file and created 10 copies of it:

# Just create a CSV with data to import, using the SQL Server error log.
[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$ServerInstance = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $Env:Computername
$ServerInstance.ReadErrorLog() | Export-Csv -Path "c:\SQLAuthority\ErrorLog.csv" -NoTypeInformation
for ($Count = 1; $Count -le 10; $Count++) {
    Copy-Item "c:\SQLAuthority\ErrorLog.csv" "c:\SQLAuthority\ErrorLog$($Count).csv"
}

Now in my path c:\SQLAuthority, I have 10 CSV files. Now it is time to create a table. In my case, the SQL Server is called R2D2, the database is SQLServerRepository, and the table is CSV_SQLAuthority:

CREATE TABLE [dbo].[CSV_SQLAuthority](
    [LogDate] [datetime] NULL,
    [Processinfo] [varchar](20) NULL,
    [Text] [varchar](MAX) NULL
)

Let's play a little bit. I want to import all the CSV files from the path into the table synchronously:

# Importing synchronously
$DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
$DataTable = Out-DataTable -InputObject $DataImport
Write-DataTable -ServerInstance R2D2 -Database SQLServerRepository -TableName CSV_SQLAuthority -Data $DataTable

Very cool, right? Let's do it asynchronously and in the background using PowerShell jobs:

# If you want to do it all asynchronously
Start-Job -Name 'ImportingAsynchronously' `
    -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
    -ScriptBlock {
        $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv")
        $DataTable = Out-DataTable -InputObject $DataImport
        Write-DataTable -ServerInstance "R2D2" `
                        -Database "SQLServerRepository" `
                        -TableName "CSV_SQLAuthority" `
                        -Data $DataTable
    }

Oh, but what if I have CSV files that are large in size and I want to import each one asynchronously?
In this case, this is what should be done:

Get-ChildItem "c:\SQLAuthority\*.csv" | % {
    Start-Job -Name "$($_)" `
        -InitializationScript { Ipmo Functions -Force -DisableNameChecking } `
        -ScriptBlock {
            $DataImport = Import-Csv -Path $args[0]
            $DataTable = Out-DataTable -InputObject $DataImport
            Write-DataTable -ServerInstance "R2D2" `
                            -Database "SQLServerRepository" `
                            -TableName "CSV_SQLAuthority" `
                            -Data $DataTable
        } -ArgumentList $_.fullname
}

How cool is that? Now for the fun stuff: let's schedule it as a SQL Server Agent job. If you are using SQL Server 2012, you can use the PowerShell job step. Otherwise you need to use a CmdExec job step calling PowerShell.exe. We will use the second option. First, create a ps1 file called ImportCSV.ps1 with the script above and save it in a path. In my case, it is c:\temp\automation. Just add these lines at the end:

Get-Job | Wait-Job | Out-Null
Remove-Job -State Completed

Why? See my post Dooh PowerShell Trick – Running Scripts That Has Posh Jobs on a SQL Agent Job. Remember, this trick is for ALL scripts that use PowerShell jobs with any kind of scheduling tool (SQL Server Agent, Windows Task Scheduler). Create a job called ImportCSV and a step called Step_ImportCSV and choose CmdExec. Then you just need to schedule or run it. I did a short video (with matching good background music) and you can see it at: That's it guys. C'mon, join me in the #PowerShellLifeStyle. You will love it. If you want to check what we can do with PowerShell and SQL Server, don't miss Laerte Junior's LiveMeeting on July 18. You can find more information at: LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server With PowerShell – English. Reference: Pinal Dave (http://blog.sqlauthority.com)

  • Play video in UIWebView from NSData

    - by Papagalli
    Hello there, does anybody know how to play a video in a UIWebView? The video comes from the iPhone library; when I pick it, I store it in Core Data as binary data. If I use the local URL of the video it works fine; the problem is that this URL refers to a folder named "tmp", and I don't know its lifetime. If I could get the real URL of the video file it would be OK.

//attachment is the Core Data object containing the data
NSURL *url = [NSURL URLWithString:attachment.path];
[self.webView loadRequest:[NSURLRequest requestWithURL:url]];

I tried a solution that doesn't work (I tried the MIME types "video/quicktime" and "video/mp4"):

[self.webView loadData:attachment.data MIMEType:@"video/mov" textEncodingName:nil baseURL:nil];

I think the second solution is the closest, and the problem comes from the MIME type. Thanks in advance if anybody has a solution for this :)

  • Benefits of PerformancePoint Services Using SharePoint Server 2010

    - by Wayne
    What is PerformancePoint Services? Most of the time, the metrics that make up your key performance indicators are not simple values from a data source. In PerformancePoint Server 2007, you could create two kinds of KPI metrics: simple single-value metrics from any supported data source, or complex multiple-value metrics from a single Analysis Services data source using MDX. Now things are even easier with PerformancePoint Services in SharePoint 2010. Let us check what it is. PerformancePoint Services in SharePoint Server 2010 is a performance management service that you can use to monitor and analyze your business. By providing flexible, easy-to-use tools for building dashboards, scorecards, reports, and key performance indicators (KPIs), PerformancePoint Services can help everyone across an organization make informed business decisions that align with companywide objectives and strategy. Scorecards, dashboards, and KPIs help drive accountability. Integrated analytics help employees move quickly from monitoring information to analyzing it and, when appropriate, sharing it throughout the organization. Prior to the addition of PerformancePoint Services to SharePoint Server, Microsoft Office PerformancePoint Server 2007 functioned as a standalone server. Now PerformancePoint functionality is available as an integrated part of the SharePoint Server Enterprise license, as is the case with Excel Services in Microsoft SharePoint Server 2010. The popular features of earlier versions of PerformancePoint Services are preserved, along with numerous enhancements and additional functionality.
New PerformancePoint Services features: PerformancePoint Services can now utilize SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. Dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework.
New features and enhancements of SharePoint 2010 PerformancePoint Services:
• With PerformancePoint Services functioning as a service in SharePoint Server, dashboards and dashboard items are stored and secured within SharePoint lists and libraries, providing you with a single security and repository framework. The new architecture also takes advantage of SharePoint Server scalability, collaboration, backup and recovery, and disaster recovery capabilities. You can also include and link PerformancePoint Services Web Parts with other SharePoint Server Web Parts on the same page. The new architecture also streamlines security models, simplifying access to report data.
• The Decomposition Tree is a new visualization report type available in PerformancePoint Services. You can use it to quickly and visually break down higher-level data values from a multi-dimensional data set to understand the driving forces behind those values. The Decomposition Tree is available in scorecards and analytic reports and, ultimately, in dashboards.
• You can access more detailed business information with improved scorecards. Scorecards have been enhanced to make it easy for you to drill down and quickly access more detailed information. PerformancePoint scorecards also offer more flexible layout options, dynamic hierarchies, and calculated-KPI features. Using this enhanced functionality, you can now create custom metrics that use multiple data sources. You can also sort, filter, and view variances between actual and target values to help you identify concerns or risks.
• Better Time Intelligence filtering capabilities let you create and use dynamic time filters that are always up to date. Other improved filters make it easier for dashboard users to quickly focus on the information that is most relevant.
• The ability to include and link PerformancePoint Services Web Parts together with other PerformancePoint Services Web Parts on the same page.
• Dashboard items are easier to author and publish by using Dashboard Designer.
• SQL Server Analysis Services 2008 support.
• Increased support for accessibility compliance in individual reports and scorecards.
• The KPI Details report is a new report type that displays contextually relevant information about KPIs, metrics, rows, columns, and cells within a scorecard. The KPI Details report works as a Web Part that links to a scorecard or individual KPI to show relevant metadata to the end user in SharePoint Server. This Web Part can be added to PerformancePoint dashboards or any SharePoint Server page.
• Create analytic reports to better understand the underlying business forces behind the results. Analytic reports have been enhanced to support value filtering, new chart types, and server-based conditional formatting.
To conclude, PerformancePoint Services, by becoming tightly integrated with SharePoint Server 2010, takes advantage of many enterprise-level SharePoint Server 2010 features. Unfortunately, SharePoint Foundation 2010 doesn't include this feature. There are still many choices in the SharePoint family of products, including SharePoint Server 2010, SharePoint Foundation, SharePoint Server 2007, and associated free SharePoint web parts and templates.

  • Populating a DataGrid with SQLResult in AIR/Flex

    - by Deyon
    I've been beating myself up all day on this... I'm about to call it quits and get some Chinese food -=\ I'm selecting data from a local SQL DB.

[Bindable]
public var ac:ArrayCollection;

public function select():void {
    statement.sqlConnection = conn;
    statement.text = "SELECT * FROM PROJECT";
    statement.execute();
    var res:SQLResult = statement.getResult();
    ac = new ArrayCollection(res.data);
    // So this traces out [object Object], so it works
    trace(ac);
}

If I do var myObj:Object = res.data[0]; I can trace myObj and view the data. But I don't know how to insert the data into a DataGrid. mygrid.dataProvider = ac; does not work. I'm using Flash Builder 4. Help please...

  • Java EE 6 and NoSQL/MongoDB on GlassFish using JPA and EclipseLink 2.4 (TOTD #175)

    - by arungupta
    TOTD #166 explained how to use MongoDB in your Java EE 6 applications. The code in that tip used the APIs exposed by the MongoDB Java driver and so requires you to learn a new API. However, if you are building Java EE 6 applications then you are already familiar with the Java Persistence API (JPA). EclipseLink 2.4, scheduled to release as part of Eclipse Juno, provides support for NoSQL databases by mapping a JPA entity to a document. Their wiki provides a complete explanation of how the mapping is done. This Tip Of The Day (TOTD) will show how you can leverage that support in your Java EE 6 applications deployed on GlassFish 3.1.2. Before we dig into the code, here are the key concepts:
- A POJO is mapped to a NoSQL data source using @NoSql or the <no-sql> element in "persistence.xml".
- A subset of JPQL and Criteria queries is supported, depending on the underlying data store.
- Connection properties are defined in "persistence.xml".
Now, let's take a look at the code.
Download the latest EclipseLink 2.4 nightly bundle. There is an Installer, Source, and Bundle; make sure to download the Bundle link (20120410) and unzip. Download the GlassFish 3.1.2 zip and unzip. Install the EclipseLink 2.4 JARs in GlassFish. Remove the following JARs from "glassfish/modules":
org.eclipse.persistence.antlr.jar
org.eclipse.persistence.asm.jar
org.eclipse.persistence.core.jar
org.eclipse.persistence.jpa.jar
org.eclipse.persistence.jpa.modelgen.jar
org.eclipse.persistence.moxy.jar
org.eclipse.persistence.oracle.jar
Add the following JARs from the EclipseLink 2.4 nightly build to "glassfish/modules":
org.eclipse.persistence.antlr_3.2.0.v201107111232.jar
org.eclipse.persistence.asm_3.3.1.v201107111215.jar
org.eclipse.persistence.core.jpql_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.core_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.jpa.jpql_2.0.0.v20120407-r11132.jar
org.eclipse.persistence.jpa.modelgen_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.jpa_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.moxy_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.nosql_2.4.0.v20120407-r11132.jar
org.eclipse.persistence.oracle_2.4.0.v20120407-r11132.jar
Start MongoDB. Download the latest MongoDB from here (2.0.4 as of this writing). Create the default data directory for MongoDB as:

sudo mkdir -p /data/db/
sudo chown `id -u` /data/db

Refer to the Quickstart for more details. Start MongoDB as:

arungup-mac:mongodb-osx-x86_64-2.0.4 <arungup> -> ./bin/mongod
./bin/mongod --help for help and startup options
Mon Apr  9 12:56:02 [initandlisten] MongoDB starting : pid=3124 port=27017 dbpath=/data/db/ 64-bit host=arungup-mac.local
Mon Apr  9 12:56:02 [initandlisten] db version v2.0.4, pdfile version 4.5
Mon Apr  9 12:56:02 [initandlisten] git version: 329f3c47fe8136c03392c8f0e548506cb21f8ebf
Mon Apr  9 12:56:02 [initandlisten] build info: Darwin erh2.10gen.cc 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386 i386 BOOST_LIB_VERSION=1_40
Mon Apr  9 12:56:02 [initandlisten] options: {}
Mon Apr  9 12:56:02 [initandlisten] journal dir=/data/db/journal
Mon Apr  9 12:56:02 [initandlisten] recover : no journal files present, no recovery needed
Mon Apr  9 12:56:02 [websvr] admin web console waiting for connections on port 28017
Mon Apr  9 12:56:02 [initandlisten] waiting for connections on port 27017

Check out the JPA/NoSQL sample from the SVN repository. The complete source code built in this TOTD can be downloaded here.
Create a Java EE 6 web app. Create a Java EE 6 Maven web app as:

mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=webapp-javaee6 -DgroupId=model -DartifactId=javaee-nosql -DarchetypeVersion=1.5 -DinteractiveMode=false

Copy the model files from the checked-out workspace to the generated project as:

cd javaee-nosql
cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/model src/main/java

Copy "persistence.xml":

mkdir src/main/resources
cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/META-INF ./src/main/resources

Add the following dependencies:

<dependency>
  <groupId>org.eclipse.persistence</groupId>
  <artifactId>org.eclipse.persistence.jpa</artifactId>
  <version>2.4.0-SNAPSHOT</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.eclipse.persistence</groupId>
  <artifactId>org.eclipse.persistence.nosql</artifactId>
  <version>2.4.0-SNAPSHOT</version>
</dependency>
<dependency>
  <groupId>org.mongodb</groupId>
  <artifactId>mongo-java-driver</artifactId>
  <version>2.7.3</version>
</dependency>

The first one is for the latest EclipseLink APIs, the second one is for the EclipseLink/NoSQL support, and the last one is the MongoDB Java driver. And the following repository:

<repositories>
  <repository>
    <id>EclipseLink Repo</id>
    <url>http://www.eclipse.org/downloads/download.php?r=1&amp;nf=1&amp;file=/rt/eclipselink/maven.repo</url>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>

Copy "Test.java" to the generated project:

mkdir src/main/java/example
cp -r ~/code/workspaces/org.eclipse.persistence.example.jpa.nosql.mongo/src/example/Test.java ./src/main/java/example/

This file contains the source code to CRUD the JPA entity to MongoDB. This sample is explained in detail on the EclipseLink wiki. Create a new servlet in the "example" directory as:

package example;

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * @author Arun Gupta
 */
@WebServlet(name = "TestServlet", urlPatterns = {"/TestServlet"})
public class TestServlet extends HttpServlet {
    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlet TestServlet</title>");
            out.println("</head>");
            out.println("<body>");
            out.println("<h1>Servlet TestServlet at " + request.getContextPath() + "</h1>");
            try {
                Test.main(null);
            } catch (Exception ex) {
                ex.printStackTrace();
            }
            out.println("</body>");
            out.println("</html>");
        } finally {
            out.close();
        }
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }
}

Build the project and deploy it as:

mvn clean package
glassfish3/bin/asadmin deploy --force=true target/javaee-nosql-1.0-SNAPSHOT.war

Accessing http://localhost:8080/javaee-nosql/TestServlet shows the following messages in server.log:

connecting(EISLogin(
  platform=> MongoPlatform
  user name=> ""
  MongoConnectionSpec()))
...
Connected: User:
Database: 2.7  Version: 2.7
...
Executing MappedInteraction()
  spec => null
  properties => {mongo.collection=CUSTOMER, mongo.operation=INSERT}
  input => [DatabaseRecord(
    CUSTOMER._id => 4F848E2BDA0670307E2A8FA4
    CUSTOMER.NAME => AMCE)]
...
Data access result: [{TOTALCOST=757.0, ORDERLINES=[{DESCRIPTION=table, LINENUMBER=1, COST=300.0}, {DESCRIPTION=balls, LINENUMBER=2, COST=5.0}, {DESCRIPTION=rackets, LINENUMBER=3, COST=15.0}, {DESCRIPTION=net, LINENUMBER=4, COST=2.0}, {DESCRIPTION=shipping, LINENUMBER=5, COST=80.0}, {DESCRIPTION=handling, LINENUMBER=6, COST=55.0}, {DESCRIPTION=tax, LINENUMBER=7, COST=300.0}], SHIPPINGADDRESS=[{POSTALCODE=L5J1H7, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa, STREET=17 Jane St.}], VERSION=2, _id=4F848E2BDA0670307E2A8FA8, DESCRIPTION=Pingpong table, CUSTOMER__id=4F848E2BDA0670307E2A8FA7, BILLINGADDRESS=[{POSTALCODE=L5J1H8, PROVINCE=ON, COUNTRY=Canada, CITY=Ottawa, STREET=7 Bank St.}]}]

You won't see any output in the browser, just the output in the console, but the code can easily be modified to show it. Once again, the complete Maven project can be downloaded here. Do you want to try accessing relational and non-relational (aka NoSQL) databases in the same PU?

  • Creating line graph/chart in vb.net (VS2008)

    - by typoknig
    I am reluctant to ask this question because a lot of similar questions have been asked, but after reading through them I am not getting the info I need. I am trying to follow this tutorial and I think it is going to work OK, but I have a lot of data to put in, and the tutorial has the reader create the chart data points manually. I want the data points to be generated from an integer which can change while the program is running (thus the chart size needs to change), and the y-coordinate of the data points needs to come from an array. I have attempted to "bind" the data, but I am messing it up somehow, and I don't even think that is the best way to do what I want. Also, I do not have to use the methods suggested in the tutorial; I am looking for the highest-quality, most efficient way to generate a line graph in VB.NET (VS2008) based on the criteria I previously mentioned.

  • Remove characters after specific character in string, then remove substring?

    - by sah302
    I feel kind of dumb posting this when it seems so simple and there are tons of questions on strings/characters/regex, but I couldn't find quite what I needed (except in another language: http://stackoverflow.com/questions/2176544/remove-all-text-after-certain-point). I've got the following code:

[Test]
public void stringManipulation()
{
    String filename = "testpage.aspx";
    String currentFullUrl = "http://localhost:2000/somefolder/myrep/test.aspx?q=qvalue";
    String fullUrlWithoutQueryString = currentFullUrl.Replace("?.*", "");
    String urlWithoutPageName = fullUrlWithoutQueryString.Remove(fullUrlWithoutQueryString.Length - filename.Length);
    String expected = "http://localhost:2000/somefolder/myrep/";
    String actual = urlWithoutPageName;
    Assert.AreEqual(expected, actual);
}

I tried the solution in the question above (hoping the syntax would be the same!), but no luck. I want to first remove the query string, which could be any length, then remove the page name, which again could be any length. How can I remove the query string from the full URL such that this test passes?
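For reference, a hedged sketch of the approach usually taken here: String.Replace does literal replacement (it does not treat "?.*" as a pattern), so the query string is typically cut off with IndexOf/Substring, and LastIndexOf('/') drops the page name regardless of its length:

// Hedged sketch: IndexOf/Substring instead of String.Replace, which treats
// "?.*" as a literal string rather than a regular expression.
string currentFullUrl = "http://localhost:2000/somefolder/myrep/test.aspx?q=qvalue";

int queryStart = currentFullUrl.IndexOf('?');
string fullUrlWithoutQueryString = queryStart >= 0
    ? currentFullUrl.Substring(0, queryStart)
    : currentFullUrl;

// Keep everything up to and including the last '/' to drop the page name.
string urlWithoutPageName = fullUrlWithoutQueryString.Substring(
    0, fullUrlWithoutQueryString.LastIndexOf('/') + 1);
// urlWithoutPageName == "http://localhost:2000/somefolder/myrep/"

Cutting at the last '/' also sidesteps the mismatch in the test above, where filename is "testpage.aspx" but the page in the URL is "test.aspx".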

  • Java Classpath Problems in Ubuntu

    - by Travis
    First off, I'm running Ubuntu 9.10. I've edited the /etc/environment file to look like this:

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"
JAVA_HOME="/usr/lib/jvm/java-6-sun-1.6.0.20"
CLASSPATH="/home/travis/freetts/lib/freetts.jar:/home/travis/freetts/lib/jsapi.jar:."

I then run "source /etc/environment" to make sure the changes are included. Then I try compiling my simple test program using javac Test.java. It throws out a few errors, but when I compile like this:

javac -cp /home/travis/freetts/lib/freetts.jar:/home/travis/freetts/lib/jsapi.jar:. Test.java

it works just fine. This leads me to believe that for some reason javac isn't seeing the CLASSPATH environment variable. I can echo it and everything in the terminal: echo $CLASSPATH gives me what I put in. Any help on this would be greatly appreciated.

  • Why isn't my assets folder being installed on emulator?

    - by Brad Hein
    Where are my assets being installed to? I utilize an assets folder in my new app, with two files in the folder. When I install my app on the emulator, I cannot access my assets, and furthermore I cannot see them on the emulator filesystem. I extracted my apk and confirmed the assets folder exists:

$ ls -ltr assets/
total 16
-rw-rw-r--. 1 brad brad 1050 2010-05-20 00:33 schema-DashDB.sql
-rw-rw-r--. 1 brad brad 9216 2010-05-20 00:33 dash.db

On the emulator, there is no assets folder:

# pwd
/data/data/com.gtosoft.dash
# ls -l
drwxr-xr-x system system 2010-05-20 00:46 lib
#

I just want to package a pre-built database with my app and then open it to obtain data when needed. I just tried it on my Moto Droid and I am unable to access/open the DB, just like on the emulator: DBFile=/data/data/com.gtosoft.dash/assets/dash.db Building the DB on the fly from a schema file is out of the question because it's such a slow process (about 5-10 statements per second is all I get for throughput).

  • Objective-C (iPhone) Memory Management

    - by Steven
    I'm sorry to ask such a simple question, but it's a specific question I've not been able to find an answer for. I'm not a native Objective-C programmer, so I apologise if I use any C# terms! If I define an object in test.h:

@interface test : something {
    NSString *_testString;
}

Then initialise it in test.m:

-(id)init {
    _testString = [[NSString alloc] initWithString:@"hello"];
}

Then I understand that I would release it in dealloc, as every init should have a release:

-(void)dealloc {
    [_testString release];
}

However, what I need clarification on is what happens if, in init, I use one of the shortcut methods for object creation. Do I still release it in dealloc? Doesn't this break the "one release for one init" rule? e.g.

-(id)init {
    _testString = [NSString stringWithString:@"hello"];
}

Thanks for your help, and if this has been answered somewhere else, I apologise!! Steven

  • Testing across multiple sessions in merb using webrat

    - by m7d
    I want to test across multiple sessions using Webrat in Merb. Apparently, this is fairly easy to accommodate in Rails via: http://erikonrails.snowedin.net/?p=159. Following the same logic, I am trying to do something that follows that pattern for Merb and Webrat. Here is an attempt (which does not work because MerbAdapter does not respond to visit and other Webrat session methods; I don't want to take too much more time with this, so I have stopped here for now):

# defined in test.rb environment file
module Merb #:nodoc:
  module Test #:nodoc:
    module RequestHelper #:nodoc:
      def in_a_separate_session
        old = @_webrat_session.response.clone
        @_webrat_session = Webrat::MerbAdapter.new
        yield
        @_webrat_session.response = old
      end
    end
  end
end

I tried a few other ideas, but obviously I am missing something. Anyone else know how this would be done in Merb? I think I could specify a cookie jar using the request mock, but I prefer to do this with Webrat.

  • Perl: Getting the name of the current subroutine

    - by kiruthika
    Hi all, in Perl we can get the name of the current package and the current line number using predefined tokens like __PACKAGE__ and __LINE__. In the same way, I want to get the name of the current subroutine. Example:

use strict;
use warnings;

print __PACKAGE__;

sub test() {
    print __LINE__;
}

&test();

In the above code I want to get the name of the subroutine inside the function 'test'. Thanks in advance.

  • MVC2 Areas and unit testing for routes

    - by Alexander Shapovalov
    Hello, I want to test my routes in unit tests, but Areas are not working in my unit tests. Is it possible to test ASP.NET MVC 2 routes for Areas? I am using this code:

private RouteCollection routes;

[SetUp]
public void SetUp()
{
    this.routes = new RouteCollection();
    MvcApplication.RegisterRoutes(this.routes);
}

[Test]
public void Should_Navigate_To_AdminUser_Controller_EditUser_Method()
{
    HttpContextBase fakeCtx = CreateFakeContext("~/Admin/User/Edit/3");
    RouteData routeData = this.routes.GetRouteData(fakeCtx);
    Assert.IsNotNull(routeData, "Route is not defined!");
    Assert.AreEqual("Edit", routeData.Values["action"]);
    Assert.AreEqual("User", routeData.Values["controller"]);
    Assert.AreEqual("3", routeData.Values["id"]);
}
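For reference, a hedged sketch of a workaround that is often suggested: AreaRegistration.RegisterAllAreas scans assemblies through the hosting environment and tends to fail outside ASP.NET, so the area's registration class can be invoked directly in the test setup (AdminAreaRegistration is a hypothetical name for the Admin area's registration class):

// Hedged sketch: register one area's routes explicitly instead of calling
// AreaRegistration.RegisterAllAreas, which needs the ASP.NET hosting environment.
[SetUp]
public void SetUp()
{
    this.routes = new RouteCollection();
    MvcApplication.RegisterRoutes(this.routes);

    var area = new AdminAreaRegistration(); // hypothetical area registration class
    area.RegisterArea(new AreaRegistrationContext(area.AreaName, this.routes));
}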

  • Multithreading/Parallel Processing in PHP

    - by manyxcxi
    I have a PHP script that will generate a report using PHPExcel from data queried from a MySQL DB. Currently, processing is linear: it gets the data back from MySQL, reads in the Excel template, writes the data to the template, then outputs it. I have optimized the code to the point that the data is only iterated over once, and there is very little processing done on the PHP side. The query returns hundreds of lines in less than .001 seconds, so it is running fast enough. After some timing I have found my bottlenecks to be (surprise, surprise) reading the template and writing the output. I would like to do this:
- Spawn a thread/process to read the template
- Spawn a thread/process to fetch the data
- Return back to the parent thread (the parent thread will wait until both are complete)
- Proceed on as normal
My main questions are: is this possible, and is it worth it? If yes to both, how would you tackle it? Also, it is PHP 5 on CentOS.

  • ASP.NET MVC jQuery Ajax input parameters are null

    - by Dofs
    Hi, I am trying to post some data with jQuery Ajax, but the parameters in my action method are null. This is a simple test to send data:

var dataPost = {
    titel: 'titel',
    message: 'msg',
    tagIds: 'hello'
};

jQuery.ajax({
    type: "POST",
    url: "Create",
    contentType: 'application/json; charset=utf-8',
    data: $.toJSON(dataPost),
    dataType: "json",
    success: function(result) {
        alert("Data Returned: ");
    }
});

And my action method looks like this:

[HttpPost]
public ActionResult Create(string title, string message, string tagIds) { ... }

There must be something basic wrong with the data I send, but I can't figure out what. The title, message, and tagIds are null every time, so there is something wrong with the encoding; I just don't know what. Note: jQuery.toJSON is this plugin.
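For reference, a hedged sketch of how the server side can read such a JSON-encoded body in MVC 2, where no JSON value provider ships out of the box (the RequestData class and controller name are hypothetical):

// Hedged sketch: deserialize the raw JSON request body manually, since MVC 2's
// default value providers only bind form fields. RequestData is hypothetical.
using System.IO;
using System.Web.Mvc;
using System.Web.Script.Serialization;

public class RequestData
{
    public string titel { get; set; }
    public string message { get; set; }
    public string tagIds { get; set; }
}

public class MessagesController : Controller // hypothetical controller name
{
    [HttpPost]
    public ActionResult Create()
    {
        Request.InputStream.Position = 0;
        string json = new StreamReader(Request.InputStream).ReadToEnd();
        RequestData data = new JavaScriptSerializer().Deserialize<RequestData>(json);
        return Json(new { ok = data != null });
    }
}

Note also that the client posts titel while the action expects title, so even a JSON-aware binder would leave that parameter null.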
