Search Results

Search found 91480 results on 3660 pages for 'large data in sharepoint list'.


  • Triggering Data Changes in N-Tier

    - by Ryan Kinal
    I've been studying n-tier architectures as of late, particularly in VB.NET with Entity Framework and/or LINQ to SQL. I understand the basic concepts, but have been wondering about best practices in regard to triggering CRUD-type operations from user input/action. So, the architecture looks something like the following: [presentation layer] - [business layer] - [data layer] - (database). Getting information from the database into the presentation layer is simple and abstracted: it's just a matter of instantiating a new object from the business layer, which in turn uses the data layer to get at the correct information. However, saving (updating and inserting) and deleting seem to require particular APIs on the relevant business objects. I have to assume this is standard practice, unless a business object will save itself on various operations (which seems inefficient) or on disposal (which seems like it just wouldn't work, or would be unwieldy and unreliable). Should my "savable" business objects all implement a particular "ISavable" or "IDatabaseObject" interface? Is this a recognized (anti-)pattern? Are there other, better patterns I should be using that I'm just unaware of? The TL;DR question, I suppose, is: how does the presentation layer trigger database changes?
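    One common answer is to route saves through an explicit repository or unit-of-work API instead of putting Save() on every entity. Below is a minimal C# sketch of that shape; every type and member name here is hypothetical, not from any particular framework:

        // Hypothetical sketch: persistence is triggered through an explicit
        // repository/unit-of-work API instead of an ISavable on each entity.
        public interface IRepository<T>
        {
            T GetById(int id);
            void Add(T entity);
            void Update(T entity);
            void Delete(T entity);
            void SaveChanges();   // pushes pending work through the data layer
        }

        public class Customer { public int Id; public string Name; }

        // The presentation layer calls the business-layer API; it never knows
        // whether EF, LINQ to SQL, or raw ADO.NET does the actual work.
        public class CustomerScreen
        {
            private readonly IRepository<Customer> _customers;
            public CustomerScreen(IRepository<Customer> customers) { _customers = customers; }

            public void OnSaveClicked(Customer edited)
            {
                _customers.Update(edited);
                _customers.SaveChanges();
            }
        }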

    Read the article

  • Unified data source for k2 installed Joomla websites

    - by Özkan ÖZLÜ
    I am responsible for a few web sites of my organization. I use Joomla! 2.5.9 for those web sites, and they all run on the same server. I use the K2 component for content management. I have a general website which shows all the staff information on the 'Staff' page. Some of those people and their content are also shown on another department's website. There is a separate database for each web site. For example: on the general website (let's say general.org), when I click on the 'Staff' menu item, the page shows all of the people who work at my organization, across different departments. On another web site (e.g. education.general.org), when I click on the 'Staff' menu item, it shows the people who work at the education department. But each web site has its own user accounts, which means a modification in one of them does not affect the others. If one of the education staff changes his profile picture on the education web site, he also has to do it on the general web site. And sometimes one person might work at two departments, so he has to edit his data three times. Is it possible to merge the records for all websites? In other words, I want everyone to insert/update their data on the general web site, and the other web sites to be updated automatically.

    Read the article

  • Having the same texture data in different ID3D11Texture2D

    - by bdmnd
    Sorry if this has been answered elsewhere - I'm rather new to DX. My question concerns conservation of resources - specifically textures in VRAM. I assume that upon returning from a call to CreateTexture2D, any texture data supplied has been copied elsewhere, likely to VRAM. Does DX11 have any facility for having multiple ID3D11Texture2D objects point to the same data? This might at first seem silly, but imagine an ID3D11Texture2D which is an array of textures. In one material, an artist has chosen to blend three identically sized maps, saved on disk as A.dds, B.dds, and C.dds. Then imagine they have another material which also uses three maps, but this time A.dds, B.dds, and D.dds. The shader code knows the diffuse texture is a texture array, and also has the number of layers baked in (three in each case). I would essentially like to set up just two ID3D11Texture2D objects, one for each material, but I don't want to waste VRAM on two identical copies of A.dds and B.dds. I could use explicit texture arrays, of course, but that reduces the number of resources available to the shader and can complicate code somewhat more than would otherwise be needed.

    Read the article

  • Preferred way for dealing with customer-defined data in enterprise application

    - by Axarydax
    Let's say that we have a small enterprise web (intranet) application for managing data for car dealers. It has screens for managing customers, inventory, orders, warranties and workshops, and it is installed at 10 customer sites for different car dealers. The first version of this application was created without any way to provide for customer-specific data. For example, if dealer A wanted to be able to attach a photo to a customer, dealer B wanted to add an e-mail contact to each workshop, and dealer C wanted to attach multiple PDF reports to a warranty, each and every feature like this was added to the application, so all of the customers received everything on each update. However, this will inevitably lead to conflicts as the number of customers grows, because their usage patterns are unique: if, for instance, a specific dealer requested the ability to attach (for some reason) a color to an inventory item (and to search by this color) as a required field, others really wouldn't need this feature, and definitely would not want it to be required. Or, one dealer would like to manage e-mail contacts for their employees on a separate screen of the application. I imagine that a solution for this is a kind of plugin system, where we would have a core application that provides the standard features (customers, inventory, etc.) plus whatever plugins each customer has installed. There would be different kinds of plugins - standalone screens like e-mail contacts for employees, with their own logic, and customer plugins which would extend or decorate inventory items (like photo or color). Inventory (customer, order, ...) plugins would need an installation procedure, hooks for plugging into the item editor, the item display, item filtering for searching, a backup hook and such - see the sketch below. Is this the right way to solve this problem?
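    For illustration, a minimal C# sketch of the kind of extension contract described above - every type and member name here is hypothetical:

        using System.Linq;

        // Hypothetical stubs standing in for the application's real types.
        public class Item { public int Id; }
        public class EditorContext { }
        public class DisplayContext { }
        public class BackupContext { }
        public class SearchCriteria { }

        // The core app discovers installed plugins and calls these hooks
        // at the points listed in the question.
        public interface IInventoryPlugin
        {
            string Name { get; }
            void Install();                           // one-time setup: custom tables/columns
            void OnItemEditorBuild(EditorContext c);  // inject custom fields (photo, color, ...)
            void OnItemDisplay(DisplayContext c);     // decorate the rendered item
            IQueryable<Item> ApplyFilter(IQueryable<Item> items, SearchCriteria s); // searchable custom data
            void OnBackup(BackupContext c);           // include plugin data in backups
        }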

    Read the article

  • Storing Tiled Level Data in J2ME game

    - by Alex
    I'm developing a J2ME game which uses tiled backgrounds for the levels. My question is: how do I store this tile information in my game? At the moment it is stored as an array, with each number representing a different tile from the tile sheet. This works well enough, but I don't like the fact that it is 'hard-coded' into the game, because (at least in my opinion) it makes it harder to edit the levels or design new ones. I was also thinking that it would make it difficult to add a 'level pack'; I'm not sure how that would be achieved though - it's not something I was planning on doing, I'm just curious. I was wondering if there is a way I could store level data in some external file and then load it into the game. The problem is I don't know what the limitations are for J2ME regarding file I/O - can it read in any file the way desktop Java can? I am aware of the RMS, but from my experience I don't think this would work (unless I am mistaken). Also, would loading the data in this way be too big a performance hit? Or is there another way I can achieve what I am trying to do? As I said, the way I have it at the moment works fine, and if this is the only viable option then it will suffice.

    Read the article

  • Why is my USB data transfer so slow?

    - by Dave M G
    Whenever I do any kind of file transfer using USB, whether to a USB stick, or with my Android phone, or anything else, it is ridiculously slow. It says 59.8 KB/sec, which would be an awesome speed if this were 1991 and I was using a modem to dial up to my local BBS. Surely USB technology is better than that...? 37 seconds to move less data than the equivalent of 1 MP3 file? Also, regardless of what it says about speed and time, the reality is much, much slower. I routinely see it say something like "37 seconds left" and have to wait for minutes. Sometimes, if I want to move large amounts of files, it can say it will take 8 hours or more. Is this normal? My computer may not be the most awesome on the market, and it's about a year old, but it's an i5 with 4GB RAM and modern components, so surely this isn't the hardware's fault. What can I do to get better USB data transfer performance? Also, I did look at this question, but my newbie eyes don't see anything that looks like an actual solution, just a lot of discussion about what transfer rates could or should be. Update: As requested in the comments, I've generated a whole bunch of output from the command line and put it on Ubuntu Pastebin. Please see it here.

    Read the article

  • New whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling with the need to balance the day-to-day concerns of data center management against the business-level requirement to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource-intensive, and traditional infrastructure management architectures have developed over time on a project-by-project basis. These traditional management systems consist of multiple tools that require administrators to waste time performing too many steps to handle routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack: application, middleware, operating system, compute, network and storage. Only when this end-to-end capability is provided will we experience the full benefit of a scalable, efficient, responsive and secure data center. Managing Exalogic is substantially less complex and error-prone than managing traditional systems built from individually sourced, multi-vendor components, because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems. Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". The full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only - if you get an error message, please register for the Community first).

    Read the article


  • Large number of soft page faults when assigning a TJpegImage to a TBitmap

    - by Robert Oschler
    I have a Delphi 6 Pro application that processes incoming JPEG frames from a streaming video server. The code works, but I recently noticed that it generates a huge number of soft page faults over time. After doing some investigation, the page faults appear to be coming from one particular graphics operation. Note, the uncompressed bitmaps in question are 320 x 240, or about 300 KB in size, so it's not due to the handling of large images. The number of page faults being generated isn't tolerable: over an hour it can easily top 1,000,000. I created a stripped-down test case that executes the code included below on a timer, 10 times a second. The page faults appear to happen when I try to assign the TJpegImage to a TBitmap in the GetBitmap() method. I know this because I commented out that line and the page faults do not occur. The assign() triggers a decompression operation on the part of TJpegImage as it pushes the decompressed bits into the newly created bitmap that GetBitmap() returns. When I run Microsoft's pfmon utility (page fault monitor), I get a huge number of soft page fault lines concerning RtlFillMemoryUlong, so it appears to happen during a memory buffer fill operation. One puzzling note: the summary part of pfmon's report, where it shows which DLL caused which page fault, does not show any DLL names in the far-left column. I tried this on another system and it happens there too. Can anyone suggest a fix or a workaround? Here's the code. Note, IReceiveBufferForClientSocket is a simple class object that holds bytes in an accumulating buffer.

        function GetBitmap(theJpegImage: TJpegImage): Graphics.TBitmap;
        begin
          Result := TBitmap.Create;
          Result.Assign(theJpegImage);
        end;

        procedure processJpegFrame(intfReceiveBuffer: IReceiveBufferForClientSocket);
        var
          theBitmap: TBitmap;
          theJpegStream, theBitmapStream: TMemoryStream;
          theJpegImage: TJpegImage;
        begin
          theBitmap := nil;
          theJpegImage := TJPEGImage.Create;
          theJpegStream := TMemoryStream.Create;
          theBitmapStream := TMemoryStream.Create;
          try
            // ************************ BEGIN JPEG FRAME PROCESSING
            // Load the JPEG image from the receive buffer.
            theJpegStream.Size := intfReceiveBuffer.numBytesInBuffer;
            Move(intfReceiveBuffer.bufPtr^, theJpegStream.Memory^, intfReceiveBuffer.numBytesInBuffer);
            theJpegImage.LoadFromStream(theJpegStream);
            // Convert to bitmap.
            theBitmap := GetBitmap(theJpegImage);
          finally
            // Free memory objects.
            if Assigned(theBitmap) then theBitmap.Free;
            if Assigned(theJpegImage) then theJpegImage.Free;
            if Assigned(theBitmapStream) then theBitmapStream.Free;
            if Assigned(theJpegStream) then theJpegStream.Free;
          end; // try()
        end;

        procedure TForm1.Timer1Timer(Sender: TObject);
        begin
          processJpegFrame(FIntfReceiveBufferForClientSocket);
        end;

        procedure TForm1.FormCreate(Sender: TObject);
        var
          S: string;
        begin
          FIntfReceiveBufferForClientSocket := TReceiveBufferForClientSocket.Create(1000000);
          S := loadStringFromFile('c:\test.jpg');
          FIntfReceiveBufferForClientSocket.assign(S);
        end;

    Thanks, Robert

    Read the article

  • Categorising a large mysql result with php while?

    - by Ryan
    I'm using the following to grab my large result set from a MySQL db:

        $discresult = 'SELECT t.id, t.subject, t.topicimage, t.topictype, c.user_id, c.disc_id
                       FROM topics AS t
                       LEFT JOIN collections AS c ON t.id=c.disc_id
                       WHERE c.user_id='.$user_id;
        $userdiscs = $db->query($discresult) or error('Error.', __FILE__, __LINE__, $db->error());

    This returns a list of all items that the user owns. I then need to categorise these items based on the value of the "topictype" column, which I'm currently doing like this:

        <h2>Category 1</h2>
        <?php
        while ($cur_img = mysql_fetch_array($userdiscs)) {
            if ($cur_img['topictype'] == "cat-1") {
                if ($cur_img['topicimage'] != "") {
                    echo "<div><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\"><img src=\"".$cur_img['topicimage']."\" style=\"width:60px; height: 60px\" /></a><br /><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\">".$cur_img['subject']."</a></div>";
                } else {
                    echo "<div><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\"><img src=\"img/no-disc-art.jpg\" style=\"width:60px; height: 60px\" /></a><br /><a href=\"viewtopic.php?id=".$cur_img['id']."\" title=\"".$cur_img['subject']."\">".$cur_img['subject']."</a></div>";
                }
            }
        }
        mysql_data_seek($userdiscs, 0);
        ?>

        <h2>Category 2</h2>
        <?php
        while ($cur_img = mysql_fetch_array($userdiscs)) {
            if ($cur_img['topictype'] == "cat-2") {
                // same two echo branches as Category 1, just matching "cat-2"
            }
        }
        mysql_data_seek($userdiscs, 0);
        ?>

    This works fine when I rinse and repeat the code, but as the site grows I expect I'll run into problems as the number of "topictype" options increases (I'm expecting around 30 categories). I don't want to make separate queries for each categorised group of discs either, as I'd eventually have 30 queries running as categories increase, so I'm hoping to hear some suggestions or alternative approaches :) Thanks

    Read the article

  • Cutting large XML file into smaller pieces in C#

    - by NDraskovic
    I have a problem that I've been working on for quite some time now. I have an XML file with over 50,000 records (one record has 3 levels). This file is used by one of my applications to control document sending (the record holds, among other information, the type of document that has to be sent to a certain person). So in my application I load the XML file into an XmlDocument, and then by using the SelectNodes method I create an XmlNodeList from which I read the data I want. The process is like this: our worker takes the person's ID card (a simple card with a barcode) and reads it with a barcode reader. When the barcode value has been read, my application finds the person with that ID in the XML file and stores the type of the document in a string variable. Then the worker takes the document and reads its barcode, and if the value of the document's barcode and the value in the string variable match, the application makes a record that a document of type xxxxxxxx will be sent to the person with ID yyyyyyyyy. This is very simple code, it works perfectly for now, and this is how it looks.

    On the textBox1_TextChanged event (worker reads the person's ID):

        foreach (XmlNode node in NodeList)
        {
            if (String.Compare(node.Attributes.GetNamedItem("ID").Value.ToString(), textBox1.Text) == 0)
            {
                ControlString = node.ChildNodes[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString();
                break;
            }
        }
        textBox2.Focus();

    And on the textBox2_TextChanged event (worker reads the document's barcode):

        if (String.Compare(textBox2.Text, ControlString) == 0)
        {
            // Create a record and insert it into a SQL database
        }

    My question is: how will my application perform with larger XML files (I was told the XML file might grow to 500,000 records)? Will this approach still be valid, or will I need to cut the file into smaller files? If I have to cut it, please give me an idea with some code samples. I've tried to do it like this, reading an entire record and storing it into a string:

        private void WriteXml(XmlNode record)
        {
            tempXML = record.InnerXml;
            temp = "<" + record.Name + " code=\"" + record.Attributes.GetNamedItem("code").Value + "\">" + Environment.NewLine;
            temp += tempXML + Environment.NewLine;
            temp += "</" + record.Name + ">";
            SmallerXMLDocument += temp + Environment.NewLine;
            temp = "";
            i++;
        }

    tempXML, temp and SmallerXMLDocument are all string variables. Then, in a button_Click method, I load the XML file into an XmlNodeList (again by using the XmlDocument.SelectNodes method) and I try to build one big string value that holds all the records, like this:

        foreach (XmlNode node in nodes)
        {
            if (String.Compare(node.ChildNodes[3].FirstChild.Attributes.GetNamedItem("doctype").Value.ToString(), doctype1) == 0)
            {
                WriteXml(node);
            }
        }

    My idea was to create a string value (in this case called SmallerXMLDocument), and once I had passed through the entire XML file, to simply copy the value of that string into a new file. This works, but only for files that have up to 2,000 records (and mine has way more than that). So, if I need to cut the file into smaller pieces, what would be the best way to do it (keeping in mind that there could be up to half a million records in the XML file)? Thanks
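    For comparison, a minimal sketch of scanning such a file with a forward-only XmlReader, which keeps memory use flat no matter how many records there are. The element and attribute names below are assumptions based on the description, not the real schema:

        using System.Xml;

        static string FindDocType(string path, string personId)
        {
            using (XmlReader reader = XmlReader.Create(path))
            {
                // "record" and "document" are hypothetical element names.
                while (reader.ReadToFollowing("record"))
                {
                    if (reader.GetAttribute("ID") == personId)       // "ID" as in the question
                    {
                        if (reader.ReadToDescendant("document"))
                            return reader.GetAttribute("doctype");   // "doctype" as in the question
                        return null;
                    }
                }
            }
            return null;
        }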

    Read the article

  • Split a Large File In C++

    - by wdow88
    Hey all, I'm trying to write a program that takes a large file (of any type) and splits it into many smaller "chunks". I think I have the basic idea down, but for some reason I cannot create a chunk size over 12,000 bytes. I know there are a few solutions on Google, etc., but I am more interested in learning what the origin of this limitation is than in actually using the program to split files.

        // This file splits a larger file into smaller files of a user-inputted size.
        #include <iostream>
        #include <fstream>
        #include <string>
        #include <sstream>
        #include <direct.h>
        #include <stdlib.h>
        using namespace std;

        void GetCurrentPath(char* buffer)
        {
            _getcwd(buffer, _MAX_PATH);
        }

        int main()
        {
            // use the function to get the path
            char CurrentPath[_MAX_PATH];
            GetCurrentPath(CurrentPath); // Get the current directory (used for displaying output)

            fstream bigFile;
            string filename;
            int partsize;

            cout << "Enter a file name: ";
            cin >> filename;   // Receive target file
            cout << "Enter the number of bytes in each smaller file: ";
            cin >> partsize;   // Receive volume size

            bigFile.open(filename.c_str(), ios::in | ios::binary);
            bigFile.seekg(0, ios::end);  // position get-ptr 0 bytes from end
            int size = bigFile.tellg();  // get-ptr position is now same as file size
            bigFile.seekg(0, ios::beg);  // position get-ptr 0 bytes from beginning

            for (int i = 0; i <= (size / partsize); i++)
            {
                // Build file name
                string partname = filename;  // The original filename
                string charnum;              // archive number
                stringstream out;            // stringstream object out, used to build the archive name
                out << "." << i;
                charnum = out.str();
                partname.append(charnum);    // put the part name together

                // Write new file part
                fstream filePart;
                filePart.open(partname.c_str(), ios::out | ios::binary); // Open new file with the name built above

                // Check if near the end of file
                if (bigFile.tellg() < (size - (size % partsize)))
                {
                    filePart.write(reinterpret_cast<char *>(&bigFile), partsize); // Write the selected amount to the file
                    filePart.close();                   // close file
                    bigFile.seekg(partsize, ios::cur);  // move pointer to next position to be written
                }
                // Changes the size of the last volume because it is the end of the file
                else
                {
                    filePart.write(reinterpret_cast<char *>(&bigFile), (size % partsize)); // Write the selected amount to the file
                    filePart.close();                   // close file
                }
                cout << "File " << CurrentPath << partname << " produced" << endl; // display the progress of the split
            }
            bigFile.close();
            cout << "Split Complete." << endl;
            return 0;
        }

    Any ideas? Thanks!
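    For reference, a minimal C# sketch of the usual chunked-copy pattern: read file bytes into a buffer, then write out the bytes that were read. (Note that the code above passes the fstream object itself to write(), rather than bytes read from it; the sketch below shows the buffer-based shape of the operation.)

        using System.IO;

        // Minimal sketch: copy partSize-byte chunks from the source file into
        // numbered part files, writing only the bytes actually read.
        static void SplitFile(string path, int partSize)
        {
            byte[] buffer = new byte[partSize];
            using (FileStream input = File.OpenRead(path))
            {
                int bytesRead, part = 0;
                while ((bytesRead = input.Read(buffer, 0, partSize)) > 0)
                {
                    using (FileStream output = File.Create(path + "." + part++))
                        output.Write(buffer, 0, bytesRead);   // last part may be shorter
                }
            }
        }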

    Read the article

  • Using Microsoft.Reporting.WebForms.ReportViewer in a custom SharePoint WebPart

    - by iHeartDucks
    I have a requirement where I have to display some data (from a custom DB) and let the user export it to an Excel file. I decided to use the ReportViewer control in Microsoft.Reporting.WebForms. I added the required assembly to the project references and added the following code to test it out:

        protected override void CreateChildControls()
        {
            base.CreateChildControls();
            objRV = new Microsoft.Reporting.WebForms.ReportViewer();
            objRV.ID = "objRV";
            this.Controls.Add(objRV);
        }

    The first error asked me to add a line to the web.config, which I did, and the next error says:

        The type 'Microsoft.SharePoint.Portal.Analytics.UI.ReportViewerMessages, Microsoft.SharePoint.Portal, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c' does not implement IReportViewerMessages

    Is it possible to use ReportViewer in my custom web part? I'd rather not bind a repeater and write my own export-to-Excel code; I want to use something which is already built by Microsoft. Any ideas on what I can reuse?

    Edit: I commented out the following line:

        <add key="ReportViewerMessages" ...

    and now my code looks like this after I added a data source to it:

        protected override void CreateChildControls()
        {
            base.CreateChildControls();
            objRV = new Microsoft.Reporting.WebForms.ReportViewer();
            objRV.ID = "objRV";
            objRV.Visible = true;
            Microsoft.Reporting.WebForms.ReportDataSource datasource =
                new Microsoft.Reporting.WebForms.ReportDataSource("test", GroupM.Common.DB.GetAllClientCodes());
            objRV.LocalReport.DataSources.Clear();
            objRV.LocalReport.DataSources.Add(datasource);
            objRV.LocalReport.Refresh();
            this.Controls.Add(objRV);
        }

    but now I do not see any data on the page. I did check my DB call and it does return a data table with 15 rows. Any ideas why I don't see anything on the page?
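    One thing the snippet never does is tell the LocalReport which report definition (.rdlc) to render; without one, the viewer has nothing to draw. A hedged sketch of that wiring - the path is hypothetical, and the data source name passed above must match a dataset defined inside the .rdlc:

        objRV.ProcessingMode = Microsoft.Reporting.WebForms.ProcessingMode.Local;
        // Hypothetical .rdlc location; LocalReport.ReportEmbeddedResource is an alternative.
        objRV.LocalReport.ReportPath =
            System.Web.HttpContext.Current.Server.MapPath("~/Reports/ClientCodes.rdlc");
        // Note: "test" in the ReportDataSource above only works if the .rdlc
        // defines a dataset named "test".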

    Read the article

  • WCF- Large Data

    - by Pinu
    I have a WCF web service with basicHttpBinding. On the server side the transfer mode is streamedRequest, as the client will be sending us large files in the form of a memory stream. But on the client side, when I use transfer mode streamedRequest, it gives me this error: "The remote server returned an error: (400) Bad Request". And when I look into the trace information, I see this as the error message:

        Exception: There is a problem with the XML that was received from the network. See inner exception for more details.
        InnerException: The body of the message cannot be read because it is empty.

    I am able to send up to 5 MB of data using buffered transfer mode, but that will affect the performance of my web service in the long run if there are many clients trying to access the service in buffered mode.

    Client code:

        SmartConnect.Service1Client Serv = new SmartConnectClient.SmartConnect.Service1Client();
        SmartConnect.OrderCertMailResponse OrderCert = new SmartConnectClient.SmartConnect.OrderCertMailResponse();
        OrderCert.UserID = "abcd";
        OrderCert.Password = "7a80f6623";
        OrderCert.SoftwareKey = "90af1";
        string applicationDirectory = @"\\inid\utty\Bran";
        byte[] CertMail = File.ReadAllBytes(applicationDirectory + @"\5mb_test.zip");
        MemoryStream str = new MemoryStream(CertMail);
        //OrderCert.Color = true;
        //OrderCert.Duplex = false;
        //OrderCert.FirstClass = true;
        //OrderCert.ReturnAddress1 = "Test123";
        //OrderCert.ReturnAddress2 = "Test123";
        //OrderCert.ReturnAddress3 = "Test123";
        //OrderCert.ReturnAddress4 = "Test123";
        OrderCert.File = str;
        MemoryStream FileStr = str;
        Serv.OrderCertMail(OrderCert);
        // Alternative generated signature:
        // Serv.OrderCertMail(ref OrderNumber, ref Password, ref ReturnCode, ref ReturnMessage, ref SoftwareKey, ref UserID, ref FileStr);
        lblON.Text = OrderCert.OrderNumber;
        Serv.Close();

    My web service - service contract:

        [OperationContract]
        OrderCertMailResponse OrderCertMail(OrderCertMailResponse OrderCertMail);

        [MessageContract]
        public class OrderCertMailResponse
        {
            string userID = "";
            string password = "";
            string softwareID = "";
            MemoryStream file = null;

            [MessageHeader]
            public string UserID { get { return userID; } set { userID = value; } }

            [MessageHeader]
            public string Password { get { return password; } set { password = value; } }

            [MessageHeader]
            public string SoftwareKey { get { return softwareID; } set { softwareID = value; } }

            [MessageBodyMember]
            public MemoryStream File { get { return file; } set { file = value; } }

            [MessageHeader]
            public string ReturnMessage;

            [MessageHeader]
            public int ReturnCode;

            [MessageHeader]
            public string OrderNumber;

            //// Control file fields (currently commented out)
            //[MessageHeader] public string ReturnAddress1;
            //[MessageHeader] public string ReturnAddress2;
            //[MessageHeader] public string ReturnAddress3;
            //[MessageHeader] public string ReturnAddress4;
            //[MessageHeader] public bool FirstClass;
            //[MessageHeader] public bool Color;
            //[MessageHeader] public bool Duplex;
        }

    Service implementation:

        [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
        public class Service1 : IService1
        {
            public OrderCertMailResponse OrderCertMail(OrderCertMailResponse OrderCertMail)
            {
                OrderService CertOrder = new OrderService();
                ClientUserInfo Info = new ClientUserInfo();
                ControlFileInfo Control = new ControlFileInfo();
                //Info.Password = "f2496623";  // hard-coded password for development testing purposes
                //Info.SoftwareKey = "6dbb71"; // hard-coded development software key
                //Info.UserName = "sdfs";      // hard-coded UserID - for testing
                Info.UserName = OrderCertMail.UserID.ToString();
                Info.Password = OrderCertMail.Password.ToString();
                Info.SoftwareKey = OrderCertMail.SoftwareKey.ToString();
                //Control.ReturnAddress1 = OrderCertMail.ReturnAddress1;
                //Control.ReturnAddress2 = OrderCertMail.ReturnAddress2;
                //Control.ReturnAddress3 = OrderCertMail.ReturnAddress3;
                //Control.ReturnAddress4 = OrderCertMail.ReturnAddress4;
                //Control.CertMailFirstClass = OrderCertMail.FirstClass;
                //Control.CertMailColor = OrderCertMail.Color;
                //Control.CertMailDuplex = OrderCertMail.Duplex;
                //string applicationDirectory = @"\\intrepid\utility\Bryan";
                //byte[] CertMailFile = File.ReadAllBytes(applicationDirectory + @"\3mb_test.zip");
                //MemoryStream str = new MemoryStream(CertMailFile);
                OrderCertMailResponseClass OrderCertResponse = CertOrder.OrderCertmail(Info, Control, OrderCertMail.File);
                OrderCertMail.ReturnMessage = OrderCertResponse.ReturnMessage.ToString();
                OrderCertMail.ReturnCode = Convert.ToInt32(OrderCertResponse.ReturnCode.ToString());
                OrderCertMail.OrderNumber = OrderCertResponse.OrderNumber;
                return OrderCertMail;
            }
        }
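    For what it's worth, WCF's streamed transfer modes require the message body to be a single member of type System.IO.Stream, with everything else carried in headers; a MemoryStream-typed member is not treated as a streamable body. A minimal sketch of a contract shaped for streaming, with hypothetical names:

        using System.IO;
        using System.ServiceModel;

        [MessageContract]
        public class OrderCertMailRequest
        {
            [MessageHeader] public string UserID;
            [MessageHeader] public string Password;
            [MessageHeader] public string SoftwareKey;

            [MessageBodyMember]
            public Stream File;   // must be declared as Stream for streamed transfer
        }

        [ServiceContract]
        public interface IOrderService
        {
            [OperationContract]
            void OrderCertMail(OrderCertMailRequest request);
        }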

    Read the article

  • Slow select when inserting large amounts of data (MYSQL)

    - by siannopollo
    I have a process that imports a lot of data (950k rows) using inserts that insert 500 rows at a time. The process takes about 12 hours overall, which isn't too bad. Normally a query on the table is pretty quick (under 1 second), as I've put (what I think are) the proper indexes in place. The problem I'm having is trying to run a query while the import process is running: it makes the query take almost 2 minutes! What can I do to keep these two things from competing for resources (or whatever it is they're contending over)? I've looked into "insert delayed", but I'm not sure I want to change the table to MyISAM. Thanks for any help!

    Read the article

  • ASP.NET Web API and Simple Value Parameters from POSTed data

    - by Rick Strahl
    In testing out various features of Web API I've found a few oddities in the way that the serialization is handled. These are probably not super common but they may throw you for a loop. Here's what I found.

    Simple Parameters from XML or JSON Content

    Web API makes it very easy to create action methods that accept parameters that are automatically parsed from XML or JSON request bodies. For example, you can send a JavaScript JSON object to the server and Web API happily deserializes it for you. This works just fine:

        public string ReturnAlbumInfo(Album album)
        {
            return album.AlbumName + " (" + album.YearReleased.ToString() + ")";
        }

    However, if you have methods that accept simple parameter types like strings, dates, numbers etc., those methods don't receive their parameters from an XML or JSON body by default and you may end up with failures. Take the following two very simple methods:

        public string ReturnString(string message)
        {
            return message;
        }

        public HttpResponseMessage ReturnDateTime(DateTime time)
        {
            return Request.CreateResponse<DateTime>(HttpStatusCode.OK, time);
        }

    The first one accepts a string and is called with a JSON string from the client like this:

        var client = new HttpClient();
        var result = client.PostAsJsonAsync<string>("http://rasxps/AspNetWebApi/albums/rpc/ReturnString",
                                                    "Hello World").Result;

    which results in a trace like this:

        POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
        Content-Type: application/json; charset=utf-8
        Host: rasxps
        Content-Length: 13
        Expect: 100-continue
        Connection: Keep-Alive

        "Hello World"

    and produces... wait for it: null. Sending a date in the same fashion:

        var client = new HttpClient();
        var result = client.PostAsJsonAsync<DateTime>("http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime",
                                                      new DateTime(2012, 1, 1)).Result;

    results in this trace:

        POST http://rasxps/AspNetWebApi/albums/rpc/ReturnDateTime HTTP/1.1
        Content-Type: application/json; charset=utf-8
        Host: rasxps
        Content-Length: 30
        Expect: 100-continue
        Connection: Keep-Alive

        "\/Date(1325412000000-1000)\/"

    (yes, still the ugly MS AJAX date, yuk! This will supposedly change by RTM with Json.NET used for client serialization) and produces an error response:

        The parameters dictionary contains a null entry for parameter 'time' of non-nullable type 'System.DateTime' for method 'System.Net.Http.HttpResponseMessage ReturnDateTime(System.DateTime)' in 'AspNetWebApi.Controllers.AlbumApiController'. An optional parameter must be a reference type, a nullable type, or be declared as an optional parameter.

    Basically any simple parameters are not parsed properly, resulting in null being sent to the method. For the string the call doesn't fail, but for the non-nullable date it produces an error because the method can't handle a null value. This behavior is a bit unexpected to say the least, but there's a simple solution to make this work using an explicit [FromBody] attribute:

        public string ReturnString([FromBody] string message)

    and

        public HttpResponseMessage ReturnDateTime([FromBody] DateTime time)

    which explicitly instructs Web API to read the value from the body.

    UrlEncoded Form Variable Parsing

    Another similar issue I ran into is with POST form variable binding. Web API can retrieve parameters from the query string and route values, but it doesn't explicitly map parameters from POST values either. Taking our same ReturnString function from earlier and posting a message POST variable like this:

        var formVars = new Dictionary<string, string>();
        formVars.Add("message", "Some Value");
        var content = new FormUrlEncodedContent(formVars);
        var client = new HttpClient();
        var result = client.PostAsync("http://rasxps/AspNetWebApi/albums/rpc/ReturnString", content).Result;

    produces this trace:

        POST http://rasxps/AspNetWebApi/albums/rpc/ReturnString HTTP/1.1
        Content-Type: application/x-www-form-urlencoded
        Host: rasxps
        Content-Length: 18
        Expect: 100-continue

        message=Some+Value

    When calling ReturnString:

        public string ReturnString(string message)
        {
            return message;
        }

    unfortunately it does not map the message value to the message parameter. This sort of mapping is not available in Web API. Web API does support binding to form variables, but only as part of model binding, which binds object properties to the POST variables. Sending the same message as in the previous example, you can use the following code to pick up POST variable data:

        public string ReturnMessageModel(MessageModel model)
        {
            return model.Message;
        }

        public class MessageModel
        {
            public string Message { get; set; }
        }

    Note that the model is bound and the message form variable is mapped to the Message property, as other variables would be mapped to matching properties if there were more. This works, but it's not very dynamic. There's no real easy way to retrieve form variables (or query string values, for that matter) in Web API's Request object as far as I can discern. Well, only if you consider this easy:

        public string ReturnString()
        {
            var formData = Request.Content.ReadAsAsync<FormDataCollection>().Result;
            return formData.Get("message");
        }

    Oddly, FormDataCollection does not allow indexers to work, so you have to use the .Get() method, which is rather odd. If you're running under IIS/Cassini you can always resort to the old and trusty HttpContext access for request data:

        public string ReturnString()
        {
            return HttpContext.Current.Request.Form["message"];
        }

    which works fine and is easier. It's kind of a bummer that HttpRequestMessage doesn't expose some sort of raw Request object that has access to dynamic data - given that it's meant to serve as a generic REST/HTTP API, that seems like a crucial missing piece. I don't see any way to read query string values either. To me personally HttpContext works, since I don't see myself using self-hosted code much.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Web API

    Read the article

  • Search SharePoint ULS Logs with Windows

    - by djeeg
    Hi, I have a standard/new Windows 2008 R2 install with SharePoint 2010, and I am looking for a SharePoint exception that occurred sometime during the last week. So I open Windows Explorer and go to the logs directory (C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\LOGS). In the toolbar I can enter some search text ("exception"), or I can press Ctrl-F, which puts my cursor in the same search box. First it searches the filenames and comes back with no results; then I click File Contents, and it still comes back with no results. Now I think maybe there are no errors, so I search for something that I know is in the log (w3wp) - still no results. In previous versions of Windows, I could usually fix this by making *.log files be read as text. But apparently (http://technet.microsoft.com/en-us/library/cc725753(WS.10).aspx) *.log files should already be read as text. Any idea how to make the search really search log files? I would prefer a solution that does not involve installing any 3rd-party software (e.g. ULSViewer), but registry/PowerShell settings are okay.

    Read the article

  • List the root contexts in LDAP

    - by Lennart Schedin
    I would like to list or search the root context(s) in an LDAP tree. I use Apache Directory Server and Java:

        Hashtable<String, String> contextParams = new Hashtable<String, String>();
        contextParams.put("java.naming.provider.url", "ldap://localhost:10389");
        contextParams.put("java.naming.security.principal", "uid=admin,ou=system");
        contextParams.put("java.naming.security.credentials", "secret");
        contextParams.put("java.naming.security.authentication", "simple");
        contextParams.put("java.naming.factory.initial", "com.sun.jndi.ldap.LdapCtxFactory");
        DirContext dirContext = new InitialDirContext(contextParams);

        NamingEnumeration<NameClassPair> resultList;

        // Works
        resultList = dirContext.list("ou=system");
        while (resultList.hasMore()) {
            NameClassPair result = resultList.next();
            System.out.println(result.getName());
        }

        // Does not work
        resultList = dirContext.list("");
        while (resultList.hasMore()) {
            NameClassPair result = resultList.next();
            System.out.println(result.getName());
        }

    I can list the sub-nodes of ou=system, but I cannot list the sub-nodes of the actual root node. I would like to get this list just as Apache Directory Studio does.
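    In LDAPv3 the root contexts are advertised by the server's RootDSE in its namingContexts attribute, which is what Directory Studio reads; the JNDI equivalent is to query the empty DN for attributes, e.g. dirContext.getAttributes("", new String[]{"namingContexts"}), rather than list() it. For illustration, a minimal C# sketch of the same RootDSE read (connection details hypothetical):

        using System;
        using System.DirectoryServices;

        // Minimal sketch: read the RootDSE's namingContexts attribute,
        // which lists the directory's root contexts (standard LDAPv3).
        class RootContexts
        {
            static void Main()
            {
                using (var rootDse = new DirectoryEntry("LDAP://localhost:10389/RootDSE"))
                {
                    foreach (object ctx in rootDse.Properties["namingContexts"])
                        Console.WriteLine(ctx);
                }
            }
        }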

    Read the article

  • HTML table headers always visible at top of window when viewing a large table

    - by Craig McQueen
    I would like to be able to "tweak" an HTML table's presentation to add a single feature: when scrolling down through the page so that the table is on the screen but the header rows are off-screen, I would like the headers to remain visible at the top of the viewing area. This would be conceptually like the "freeze panes" feature in Excel. However, an HTML page might contain several tables, and I would only want it to happen for the table that is currently in view, and only while it is in view. Note: I've seen one solution where the table data area is made scrollable while the headers do not scroll. That's not the solution I'm looking for.

    Read the article

  • Checking for Repeated Strings in 2d list

    - by Zach Santiago
    I have a program where I have a list of names and classes, in alphabetical order. Now I'm trying to check whether names repeat, and if so, add the classes to one single name. The logic I'm after: go through the names; if a name is already in the list, add the class to that one name. So, for example, instead of having 'Anita', 'PHYS 1443' and 'Anita', 'IE 3312', I would just have 'Anita', 'PHYS 1443', 'IE 3312'. How would I go about doing this in a logical way, WITHOUT using any sort of built-in functions? I tried comparing indexes, like: if list[i][0] == list[i+1][0], append list[i+1][1] to an empty list. While that almost worked, it would screw up at some points along the way. Here is my attempt:

        c = [['Anita', 'PHYS 1443'], ['Anita', 'IE 3312'], ['Beihuang', 'PHYS 1443'],
             ['Chiao-Lin', 'MATH 1426'], ['Chiao-Lin', 'IE 3312'], ['Christopher', 'CSE 1310'],
             ['Dylan', 'CSE 1320'], ['Edmund', 'PHYS 1443'], ['Ian', 'IE 3301'],
             ['Ian', 'CSE 1320'], ['Ian', 'PHYS 1443'], ['Isis', 'PHYS 1443'],
             ['Jonathan', 'MATH 2325'], ['Krishna', 'MATH 2325'], ['Michael', 'IE 3301'],
             ['Nang', 'MATH 2325'], ['Ram', 'CSE 1320'], ['Taesu', 'CSE 1320'],
             ["Tre'Shaun", 'IE 3312'], ["Tre'Shaun", 'MATH 2325'], ["Tre'Shaun", 'CSE 1310']]

        ## Check if any names repeat
        size = len(c)
        i = 0
        d = []
        d.append(c[0][0])
        while i < size - 1:
            if c[i][0] == c[i+1][0]:
                d.append(c[i][1])
                d.append(c[i+1][1])
            else:
                d.append(c[i+1][0])
                d.append(c[i+1][1])
            i = i + 1
        print d

    The output was:

        ['Anita', 'PHYS 1443', 'IE 3312', 'Beihuang', 'PHYS 1443', 'Chiao-Lin', 'MATH 1426',
         'MATH 1426', 'IE 3312', 'Christopher', 'CSE 1310', 'Dylan', 'CSE 1320', 'Edmund',
         'PHYS 1443', 'Ian', 'IE 3301', 'IE 3301', 'CSE 1320', 'CSE 1320', 'PHYS 1443',
         'Isis', 'PHYS 1443', 'Jonathan', 'MATH 2325', 'Krishna', 'MATH 2325', 'Michael',
         'IE 3301', 'Nang', 'MATH 2325', 'Ram', 'CSE 1320', 'Taesu', 'CSE 1320',
         "Tre'Shaun", 'IE 3312', 'IE 3312', 'MATH 2325', 'MATH 2325', 'CSE 1310']

    Read the article

  • RichFaces rich:insert takes a long time to output large files

    - by Mark Lewis
    Hello, I'm using a RichFaces <rich:insert> like this:

        <rich:panel header="my head">
            <a4j:outputPanel ajaxRendered="true">
                <rich:insert src="#{MyBacking.myPath}" highlight="groovy" />
            </a4j:outputPanel>
        </rich:panel>

    If I have a 60k file to output, it takes 23 seconds. I've got a requirement to output the contents of some larger files than that, and obviously the larger the file, the longer the wait for content. The recommendation in the answer to another related question is to introduce paging. I will, but the question is: why does it take so long to output 60k of text using JSF/RichFaces? That is, reading off a local disk on a Windows XP SP2 PC - I can see from the log that the data has already been written to disk from the network. Other scripting languages appear to be faster than this - is it something to do with the JSF lifecycle having to handle the text, maybe? Thanks

    Read the article

  • How to bind data from a view of type List<List<MyViewModelClass>>?

    - by Robert Koritnik
    I have a strongly typed view of type List<List<MyViewModelClass>>. The outer list will always have two lists of List<MyViewModelClass>. For each of the two outer lists I want to display a group of checkboxes, and each set can have an arbitrary number of choices. My view model class looks similar to this:

        public class MyViewModelClass
        {
            public Area Area { get; set; }
            public bool IsGeneric { get; set; }
            public string Code { get; set; }
            public bool IsChecked { get; set; }
        }

    So the final view will look something like:

        Please select those that apply:

        First set of choices:
        [x] Option 1
        [x] Option 2
        [x] Option 3
        etc.

        Second set of choices:
        [x] Second Option 1
        [x] Second Option 2
        [x] Second Option 3
        [x] Second Option 4
        etc.

    Checkboxes should display MyViewModelClass.Area.Name, their value should be related to MyViewModelClass.Area.Id, and the checked state is of course related to MyViewModelClass.IsChecked. Question: how should I use the Html.CheckBox() or Html.CheckBoxFor() helper to display my checkboxes? I have to get these values back to the server on a postback, of course. If it makes things simpler, I could make a separate view model type like:

        public class Options
        {
            public List<MyViewModelClass> General { get; set; }
            public List<MyViewModelClass> Others { get; set; }
        }
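    With the Options wrapper, one workable shape is an indexed loop so the default model binder can rebuild the lists on postback. A minimal ASPX-style sketch, assuming the view is typed to Options (the view markup below is illustrative, not from the original code):

        <% for (int i = 0; i < Model.General.Count; i++) { %>
            <%= Html.HiddenFor(m => m.General[i].Area.Id) %>
            <%= Html.CheckBoxFor(m => m.General[i].IsChecked) %>
            <%= Html.Encode(Model.General[i].Area.Name) %>
        <% } %>

    Indexed expressions like m.General[i].IsChecked emit field names such as General[0].IsChecked, which the DefaultModelBinder maps back onto the List<MyViewModelClass> properties.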

    Read the article

  • Read-only view of a Java list with more general type parameter

    - by Michael Rusch
    Suppose I have class Foo extends Superclass. I understand why I can't do this:

        List<Foo> fooList = getFooList();
        List<Superclass> supList = fooList;

    But it would seem reasonable for me to do that if supList were somehow "read-only". Then everything would be consistent, as everything that came out of supList would be a Foo, which is a Superclass. I could probably write a List implementation that takes an underlying list and a more general type parameter, and then returns everything as the more general type instead of the specific type. It would work like the return value of Collections.unmodifiableList(), except that the type would be made more general. Is there an easier way? The reason I'm considering this is that I am implementing an interface that requires me to return an (unmodifiable) List<Superclass>, but internally I need to use Foos, so I have a List<Foo>. I can't just cast.

    Read the article

  • How do You Get a Specific Value From a System.Data.DataTable Object?

    - by Giffyguy
    I'm a low-level algorithm programmer, and databases are not really my thing - so this'll be a n00b question if ever there was one. I'm running a simple SELECT query through our development team's DAO. The DAO returns a System.Data.DataTable object containing the results of the query. This is all working fine so far. The problem I've run into now: I need to pull a value out of one of the fields of the first row in the resulting DataTable - and I have no idea where to even start. Microsoft is so confusing about this! Arrrg! Any advice would be appreciated. I'm not providing any code samples, because I believe context is unnecessary here. I'm assuming that all DataTable objects work the same way, no matter how you run your queries - and therefore any additional information would just make this more confusing for everyone.
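    For reference, a minimal sketch of reading one field from the first row - the column name here is hypothetical, and Field<T>() needs a reference to the System.Data.DataSetExtensions assembly:

        using System.Data;

        static string FirstRowValue(DataTable table)
        {
            if (table.Rows.Count == 0)
                return null;

            // The indexer works by column name or ordinal, e.g. table.Rows[0][0];
            // Field<T>() adds typing and DBNull handling on top of that.
            return table.Rows[0].Field<string>("CustomerName");  // "CustomerName" is hypothetical
        }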

    Read the article

  • What are the "Navigation Properties" in this data model for?

    - by d03boy
    I've been wondering how to properly set up many-to-many relationships in ASP.NET MVC 2 using LINQ to SQL for quite some time now. I found this blog post that seems to have a similar model layout to mine. If you look at the first screenshot showing the data model, you can see that each model has "Navigation Properties" at the bottom. What exactly are these, and why don't my models have them? I have the proper foreign keys put in place. Most specifically, I am looking at the relationship between the Article and Category models, since that is the only many-to-many relationship I see, and that's what I'm trying to model. Obviously I use an intermediary joining table between these two tables, but I am having trouble understanding the proper methodology for modeling that relationship, and I'm not finding this information anywhere on The Google.

    Read the article
