Search Results

Search found 88714 results on 3549 pages for 'data type'.


  • Efficient alternative to merge() when building a data frame from JSON files with R?

    - by Bryan
    I have written the following code, which works but is painfully slow once I start executing it over thousands of records:

        require("RJSONIO")

        people_data <- data.frame(person_id = numeric(0))
        json_data <- fromJSON(json_file)
        n_people <- length(json_data)
        for (lender in 1:n_people) {
            person_dataframe <- as.data.frame(t(unlist(json_data[[lender]])))
            people_data <- merge(people_data, person_dataframe, all = TRUE)
        }
        output_file <- paste("people_data", ".csv")
        write.csv(people_data, file = output_file)

    I am attempting to build a unified data table from a series of JSON-formatted files. The fromJSON() function reads in the data as lists of lists. Each element of the list is a person, which then contains a list of the attributes for that person. For example:

        [[1]]
        person_id name gender hair_color
        [[2]]
        person_id name location gender height
        [[...]]

        structure(list(person_id = "Amy123", name = "Amy", gender = "F",
                       hair_color = "brown"),
                  .Names = c("person_id", "name", "gender", "hair_color"))

        structure(list(person_id = "matt53", name = "Matt",
                       location = structure(c(47231, "IN"),
                                            .Names = c("zip_code", "state")),
                       gender = "M", height = 172),
                  .Names = c("person_id", "name", "location", "gender", "height"))

    The end result of the code above is a matrix where the columns are every person-attribute that appears in the structure above, and the rows are the relevant values for each person. As you can see, though, some data is missing for some of the people, so I need to ensure those show up as NA and make sure things end up in the right columns. Further, location itself is a vector with two components, state and zip_code, meaning it needs to be flattened to location.state and location.zip_code before it can be merged with another person record; this is what I use unlist() for. I then keep the running master table in people_data. The above code works, but do you know of a more efficient way to accomplish what I'm trying to do? It appears the merge() is slowing this to a crawl... I have hundreds of files with hundreds of people in each file. Thanks! Bryan
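    A hedged sketch of one faster approach, assuming json_data is the list returned by fromJSON() as above: flatten each person first, then build the data frame once at the end, so merge() never runs inside the loop.

        # Flatten each person once, find the union of all column names,
        # then assemble the table in a single pass (no per-iteration merge).
        rows <- lapply(json_data, unlist)
        all_cols <- unique(unlist(lapply(rows, names)))
        people_data <- as.data.frame(
            do.call(rbind, lapply(rows, function(r) r[all_cols])),
            stringsAsFactors = FALSE
        )
        names(people_data) <- all_cols   # attributes absent for a person come out as NA
        write.csv(people_data, file = "people_data.csv", row.names = FALSE)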


  • PerformancePoint dashboard permissions problem in MOSS

    - by Nathan DeWitt
    I have a PerformancePoint dashboard running in a MOSS 2007 portal. The dashboard consists of one SSRS 2005 report, running in SharePoint Integrated mode. NT Authority\Authenticated Users have read permissions to the report library containing the SSRS report, the dashboard, and the report library containing the dashboard.

    Users that attempt to access the dashboard receive the following error message:

        The permissions granted to user 'DOMAIN\firstname.lastname' are insufficient for performing this operation. (rsAccessDenied)

    Users that then click on the direct link to the report in MOSS will see the report with no problem. Subsequent visits to the dashboard show the report with no problem.

    The report is using a data source that is located one folder up from the report location. The report has been updated to point to the correct shared data source after deployment. Both the report and the data source have been published. The data source is using stored credentials, with a domain service account that has been set to "Use as Windows credentials". This service account is serving other reports in other areas with no problem.

    Edit: OK, I've gotten a lot more information on this problem. The request is never actually being made to the data source. The user comes in to the dashboard and requests a report for the first time, using their Kerberos token to identify themselves. The report server looks in the Report Server database, finds that they are not listed in the Users table, and generates this rsAccessDenied error. Once they view the report directly, their name is in this table and they never have the problem again. Unfortunately, removing the user from the Users table in the RS database doesn't actually cause this error to happen again.

    Everything I've read says that when you run a Report Server in MOSS integrated mode all your permissions are handled at the MOSS report library level, and all Auth users have permissions to the report library, as stated earlier. Any ideas?


  • Function parameter types in Python

    - by Leif Andersen
    Unless I'm mistaken, creating a function in Python works like this:

        def my_func(param1, param2):
            # stuff

    However, you don't actually give the types of those parameters. Also, if I remember correctly, Python is a strongly typed language, so it seems like Python shouldn't let you pass in a parameter of a different type than the function creator expected. However, how does Python know that the user of the function is passing in the proper types? Or will the program just die if it's the wrong type, assuming the function actually uses the parameter? Or do you have to specify the type, or am I missing something? Thank you.
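    A small sketch of what actually happens: Python is strongly but dynamically typed, so the check occurs at the moment an operation needs the type, not when the function is called.

        def my_func(param1, param2):
            return param1 + param2

        print(my_func(1, 2))           # 3 -- ints support +
        print(my_func("a", "b"))       # 'ab' -- so do strings
        try:
            my_func(1, "b")            # fails only when + runs on mismatched types
        except TypeError as e:
            print(e)                   # unsupported operand type(s) for +: 'int' and 'str'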


  • Connect to a web API from .NET

    - by Saif Khan
    How can I access and consume a web API from .NET? The API is not a .NET API. Here is sample code I have in Ruby:

        require 'uri'
        require 'net/http'

        url = URI.parse("http://account.codebasehq.com/widgets/tickets")
        req = Net::HTTP::Post.new(url.path)
        req.basic_auth('dave', '6b2579a03c2e8825a5fd0a9b4390d15571f3674d')
        req.add_field('Content-type', 'application/xml')
        req.add_field('Accept', 'application/xml')

        xml = "<ticket><summary>My Example Ticket</summary><status-id>1234</status-id><priority-id>1234</priority-id><ticket-type>bug</ticket-type></ticket>"

        res = Net::HTTP.new(url.host, url.port).start {|http| http.request(req, xml)}

        case res
        when Net::HTTPCreated
          puts "Record was created successfully."
        else
          puts "An error occurred while adding this record"
        end

    Where can I find information on consuming an API like this from .NET? I am aware of how to use .NET web services.
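    For orientation, a hedged sketch of roughly the same POST in C# using HttpWebRequest (the URL, credentials, and XML are taken from the Ruby example; error handling is minimal):

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        class TicketPoster
        {
            static void Main()
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
                    "http://account.codebasehq.com/widgets/tickets");
                request.Method = "POST";
                request.ContentType = "application/xml";
                request.Accept = "application/xml";
                // Credentials are sent in response to the server's 401 challenge.
                request.Credentials = new NetworkCredential(
                    "dave", "6b2579a03c2e8825a5fd0a9b4390d15571f3674d");

                string xml = "<ticket><summary>My Example Ticket</summary>" +
                             "<status-id>1234</status-id><priority-id>1234</priority-id>" +
                             "<ticket-type>bug</ticket-type></ticket>";
                byte[] body = Encoding.UTF8.GetBytes(xml);
                request.ContentLength = body.Length;
                using (Stream s = request.GetRequestStream())
                {
                    s.Write(body, 0, body.Length);
                }

                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine(response.StatusCode == HttpStatusCode.Created
                        ? "Record was created successfully."
                        : "An error occurred while adding this record");
                }
            }
        }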


  • Converting NSMutableData to NSString Problem

    - by Sheehan Alam
    initWithData: does not convert my data object into a string properly. When I check the length of the data object, it has a value.

        NSMutableData *receivedData = [[NSMutableData data] retain];
        NSString *json_string = [[NSString alloc] initWithData:receivedData
                                                      encoding:NSUTF8StringEncoding];

    Am I doing something wrong creating the string?
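    One hedged check worth adding: initWithData:encoding: returns nil when the bytes are not valid in the requested encoding, so testing for nil (and optionally retrying with another encoding) can tell whether the data or the encoding is the problem.

        // Sketch: nil here means the bytes were not valid UTF-8.
        NSString *json_string = [[NSString alloc] initWithData:receivedData
                                                      encoding:NSUTF8StringEncoding];
        if (json_string == nil) {
            // Fallback encoding chosen only for diagnosis; adjust as needed.
            json_string = [[NSString alloc] initWithData:receivedData
                                                encoding:NSISOLatin1StringEncoding];
        }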


  • Why doesn't an image button work with forms in Internet Explorer 7+? (HTML)

    - by DomingoSL
    Hello guys, I have a web form which sends POST variables:

        <form action="teacher.php" method="post">
            <input name="pass" type="password">
            <input name="quiere" type="image" value="submit" src="IMG/unlock-32.png" />
        </form>

    In the same page I check for a submit action with PHP, doing a simple isset check like:

        if (isset($_POST['quiere'])) {

    But if you do this in IE, the POST var "quiere" (the button var) is not posted; the other vars are fine, and if you try this simple form in any other browser it works. I can only get this form to work in IE by changing the image button for a normal submit button:

        <input name="quiere" type="submit" value="submit" />

    That way, the var "quiere" does get posted. So, what do you think? And sorry for my English.
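    A hedged sketch of the usual workaround: IE submits an image input as click coordinates (quiere.x / quiere.y, which PHP exposes as quiere_x / quiere_y) rather than as the input's value, so checking for either form covers all browsers.

        // Sketch: accept both the value-style and the coordinate-style submission.
        if (isset($_POST['quiere']) || isset($_POST['quiere_x'])) {
            // form was submitted
        }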


  • SQL Server CLR Integration to achieve Encryption/Decryption

    - by Aakash
    I have a requirement to store the data in encrypted form in database tables. I want to do it at the database level, but these are the problems I am facing:

    (a) The data type of the field should be varbinary.
    (b) Encryption is not supported by the Workgroup edition.
    (c) Is it possible to encrypt numeric fields?

    I want to access the encrypted data in tables to fetch in views and stored procedures for some processing, but due to the above problems I am not able to. Here is my environment:

        Development platform - ASP.NET, .NET Framework 3.5, Visual Studio 2008
        Server operating system - Windows Server 2008
        Database - SQL Server 2008 Workgroup edition

    I was also thinking of adopting a different approach to resolve this issue (yet to test its feasibility). I was just wondering if I could create a CLR function (which could take parameters to encrypt and decrypt data using the cryptography types provided in the .NET Framework), use the CLR integration feature of SQL Server, and call that function from stored procedures and views. I am not sure if I am thinking in the right direction? Any advice on this as well, please.
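    A hedged sketch of that CLR approach, assuming a symmetric key held by the assembly (the hard-coded key/IV are placeholders; real key management is deliberately omitted and would need proper design):

        // Sketch of a pair of SQL CLR scalar functions using RijndaelManaged.
        using System.Data.SqlTypes;
        using System.Security.Cryptography;
        using System.Text;
        using Microsoft.SqlServer.Server;

        public class CryptoFunctions
        {
            static readonly byte[] Key = new byte[32];   // placeholder key
            static readonly byte[] IV  = new byte[16];   // placeholder IV

            [SqlFunction]
            public static SqlBytes EncryptText(SqlString input)
            {
                if (input.IsNull) return SqlBytes.Null;
                using (RijndaelManaged aes = new RijndaelManaged())
                using (ICryptoTransform enc = aes.CreateEncryptor(Key, IV))
                {
                    byte[] plain = Encoding.UTF8.GetBytes(input.Value);
                    return new SqlBytes(enc.TransformFinalBlock(plain, 0, plain.Length));
                }
            }

            [SqlFunction]
            public static SqlString DecryptText(SqlBytes input)
            {
                if (input.IsNull) return SqlString.Null;
                using (RijndaelManaged aes = new RijndaelManaged())
                using (ICryptoTransform dec = aes.CreateDecryptor(Key, IV))
                {
                    byte[] plain = dec.TransformFinalBlock(input.Buffer, 0, (int)input.Length);
                    return new SqlString(Encoding.UTF8.GetString(plain));
                }
            }
        }

    Numeric values could be handled the same way by converting them to a string (or byte) representation before encryption; the encrypted column itself would still have to be varbinary.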


  • jQuery: Traversing AJAX response in Chrome/Safari

    - by jitzo
    I'm trying to traverse an AJAX response which contains a remote web page (HTML output). My goal is to iterate through the 'script', 'link', and 'title' elements of the remote page - load them if necessary, and embed their contents in the current page. It's working great in FF/IE, but for some reason Chrome and Safari behave differently: when I run an .each() loop on the response, Chrome/Safari seem to omit everything that is inside the <head> section of the page. Here's my current code:

        $.ajax({
            url: 'remoteFile.php',
            cache: false,
            dataFilter: function(data) {
                console.log(data);
                /* The output seems to contain the entire response, including the
                   <head> section - on all browsers, including Chrome/Safari */
                $(data).filter("link, script, title").each(function(i) {
                    console.log($(this));
                    /* IE/FF outputs all of the link/script/title elements; Chrome will
                       output only those that are not in the <head> section */
                });
                console.log($(data));
                /* This also outputs the incomplete structure on Chrome/Safari */
                return data;
            },
            success: function(response) {}
        });

    I've been struggling with this problem for quite a while now; I've found some other similar cases in Google searches, but no real solution. This happens on both jQuery 1.4.2 and jQuery 1.3.2. I really don't want to parse the response with .indexOf() and .substring() - it seems to me that it would be overkill for the client. Many thanks in advance!
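    One hedged workaround sometimes suggested for this WebKit behavior: strip the html/head/body wrapper tags before handing the markup to jQuery, so the head elements become top-level nodes that survive parsing.

        // Sketch: remove the wrapper tags, then filter the now top-level nodes.
        dataFilter: function(data) {
            var cleaned = data.replace(/<\/?(html|head|body)[^>]*>/gi, '');
            $(cleaned).filter("link, script, title").each(function() {
                console.log($(this));
            });
            return data;
        }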


  • Refresh a UITableView after loading

    - by Alavoil
    I am trying to load a table view from a cache very quickly, and have the cached data in the table view appear. Then I want to download new data and reload the table. Right now I am downloading the new data in viewDidAppear, but the view still refreshes before it displays. Any idea how I can do this?
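    A hedged sketch of one common shape for this: populate from the cache synchronously, reload, then fetch in the background and reload again on the main thread. Everything except reloadData here (the items property, loadFromCache, downloadData) is a placeholder for the poster's own code.

        - (void)viewDidLoad {
            [super viewDidLoad];
            self.items = [self loadFromCache];           // hypothetical cache loader
            [self.tableView reloadData];                 // table shows cached rows first
            [self performSelectorInBackground:@selector(fetchNewData) withObject:nil];
        }

        - (void)fetchNewData {
            NSArray *fresh = [self downloadData];        // hypothetical download
            [self performSelectorOnMainThread:@selector(applyNewData:)
                                   withObject:fresh
                                waitUntilDone:NO];
        }

        - (void)applyNewData:(NSArray *)fresh {
            self.items = fresh;
            [self.tableView reloadData];                 // reload with downloaded data
        }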


  • Implementing IPermission

    - by joniba
    Hi there, I'm considering writing a custom IPermission implementation but am not clear as to how it should work. I've already implemented IPrincipal and IIdentity. What I would like to accomplish is to be able to check permissions against the current IPrincipal's authorizations rather than its roles. Generally, authorizations are read/add-delete/update/none for a given type. So I would like to write something like this:

        [CustomPermission(SecurityAction.Demand, Type = typeof(Foo),
                          MinimumAuthorization = AuthorizationFlags.Read)]
        public void SomeMethod() {}

    Is this possible? I've looked over the code at http://msdn.microsoft.com/en-us/library/system.security.ipermission.aspx. This shows how to implement the interface, but I don't understand how the above pseudo-code would be able to check against the IPrincipal's authorizations for the type. Any help or resource references would be appreciated. Joni
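    A hedged sketch of the Demand side only, assuming a hypothetical principal type that exposes per-type authorizations (the flags enum, the principal interface, and all names here are illustrative; the XML members are stubbed):

        using System;
        using System.Security;
        using System.Threading;

        [Flags]
        public enum AuthorizationFlags { None = 0, Read = 1, Update = 2, AddDelete = 4 }

        // Hypothetical contract assumed to exist in the poster's own code.
        public interface ICustomPrincipal : System.Security.Principal.IPrincipal
        {
            AuthorizationFlags GetAuthorization(Type type);
        }

        public class CustomPermission : IPermission
        {
            public Type ProtectedType { get; set; }
            public AuthorizationFlags MinimumAuthorization { get; set; }

            public void Demand()
            {
                // The attribute's CreatePermission() would construct this object;
                // Demand() then checks the ambient principal rather than a role.
                ICustomPrincipal principal = Thread.CurrentPrincipal as ICustomPrincipal;
                if (principal == null ||
                    (principal.GetAuthorization(ProtectedType) & MinimumAuthorization) == 0)
                    throw new SecurityException("Insufficient authorization for " + ProtectedType);
            }

            // Remaining members stubbed for brevity in this sketch.
            public IPermission Copy() { return (IPermission)MemberwiseClone(); }
            public IPermission Intersect(IPermission target) { return null; }
            public IPermission Union(IPermission target) { return null; }
            public bool IsSubsetOf(IPermission target) { return target == null; }
            public void FromXml(SecurityElement e) { }
            public SecurityElement ToXml() { return null; }
        }

    The declarative half would be a CodeAccessSecurityAttribute subclass whose CreatePermission() builds a CustomPermission from the attribute's properties.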


  • Div positioning on a canvas with arrows bindings

    - by ideotop
    I'm searching for a little piece of JavaScript that can attach the x,y coordinates of a draggable div to that div (in data-x and data-y attributes, for example), and then be able to draw arrows between divs (using a data-targetid attribute?). The solution could look like a simple version of graphview, jsPlumb or vizster, but without the complicated library - only plain JavaScript/HTML.
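    A hedged sketch of the coordinate-tracking half in plain JavaScript, assuming the divs are absolutely positioned (the arrow drawing would sit on top of this, e.g. via a canvas):

        // Sketch: make a div draggable and keep data-x / data-y up to date.
        function makeDraggable(div) {
            div.onmousedown = function (e) {
                var startX = e.clientX - div.offsetLeft;
                var startY = e.clientY - div.offsetTop;
                document.onmousemove = function (e) {
                    div.style.left = (e.clientX - startX) + 'px';
                    div.style.top  = (e.clientY - startY) + 'px';
                };
                document.onmouseup = function () {
                    document.onmousemove = document.onmouseup = null;
                    // Persist the final coordinates on the element itself.
                    div.setAttribute('data-x', div.offsetLeft);
                    div.setAttribute('data-y', div.offsetTop);
                };
            };
        }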


  • Excel macro to copy an entire row from one sheet to another based upon a single word within a paragraph

    - by jason
    Guys, I'm looking for a simple Excel macro that can copy a row from one sheet to another within Excel, based upon a cell containing a specific word. I have sheet one, called "data", and sheet two, called "final". Here is an example of the data:

        A      B     C             D
        john   mary  555.555.4939  initial reply to phone conversation
        Jim    jack  555.555.5555  floor estimate for bathroom
        jerry  kim   555.555.5553  initial response to phone call

    I'd like to copy an entire row from sheet "data" to sheet "final" if the data in column D contains either the word "reply" or the word "response" somewhere within the paragraph. Any tips would be much obliged. Thanks, J
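    A hedged sketch of such a macro, assuming the data starts in row 1 of sheet "data" and the description text is in column D:

        ' Copy rows whose column D mentions "reply" or "response" to sheet "final".
        Sub CopyMatchingRows()
            Dim src As Worksheet, dst As Worksheet
            Dim i As Long, nextRow As Long, txt As String
            Set src = Worksheets("data")
            Set dst = Worksheets("final")
            nextRow = 1
            For i = 1 To src.Cells(src.Rows.Count, "D").End(xlUp).Row
                txt = LCase(src.Cells(i, "D").Value)
                If InStr(txt, "reply") > 0 Or InStr(txt, "response") > 0 Then
                    src.Rows(i).Copy Destination:=dst.Rows(nextRow)
                    nextRow = nextRow + 1
                End If
            Next i
        End Sub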


  • Performance benefits of upgrading Richfaces to newer version

    - by peteDog
    I have a client that's running an application based on JBoss 4.0.5, Seam 1.2 and RichFaces 3.0.1. Their system is having performance problems due to the fact that a lot of data is coming back from the server to be displayed on screen, and it seems like the rendering of that data is taking forever. The data brought back is displayed in a tabbed interface, but the tabs aren't currently being loaded individually - they all load at once. I'm trying to build up a case to present to the client on the benefits of upgrading to a newer version of RichFaces, which, as I understand it, has added a great number of features related to tabbed panels and being able to use Ajax to page the data and load just the chunks you actually need to display at the moment, not the rest that's in other tabs. The move to a newer version of RichFaces will also result in newer versions of JBoss and Seam, as the current production build of RichFaces 3.2.1 requires JSF 1.2. If anyone has some suggestions or experience on performance of current versions of RichFaces, paging, etc., I would really appreciate some feedback.


  • Beginner problems with references to arrays in python 3.1.1

    - by Protean
    As part of the last assignment in a beginner Python programming class, I have been assigned a traveling salesman problem. I settled on a recursive function to find each permutation and the sum of the distances between the destinations; however, I am having a lot of problems with references. Arrays in different instances of the Permute and Main functions of TSP seem to be pointing to the same reference.

        from math import sqrt

        class TSP:
            def __init__(self):
                self.CartisianCoordinates = [['A',[1,1]], ['B',[2,2]], ['C',[2,1]], ['D',[1,2]], ['E',[3,3]]]
                self.Array = []
                self.Max = 0
                self.StoredList = ['',0]

            def Distance(self, i1, i2):
                x1 = self.CartisianCoordinates[i1][1][0]
                y1 = self.CartisianCoordinates[i1][1][1]
                x2 = self.CartisianCoordinates[i2][1][0]
                y2 = self.CartisianCoordinates[i2][1][1]
                return sqrt(pow((x2 - x1), 2) + pow((y2 - y1), 2))

            def Evaluate(self):
                temparray = []
                Data = []
                for i in range(len(self.CartisianCoordinates)):
                    Data.append([])
                for i1 in range(len(self.CartisianCoordinates)):
                    for i2 in range(len(self.CartisianCoordinates)):
                        if i1 != i2:
                            temparray.append(self.Distance(i1, i2))
                        else:
                            temparray.append('X')
                    Data[i1] = temparray
                    temparray = []
                self.Array = Data
                self.Max = len(Data)

            def Permute(self, varray, index, vcarry, mcarry):  # Problem Class
                array = varray[:]
                carry = vcarry[:]
                for i in range(self.Max):
                    print('ARRAY:', array)
                    print(index, i, carry, array[index][i])
                    if array[index][i] != 'X':
                        carry[0] += self.CartisianCoordinates[i][0]
                        carry[1] += array[index][i]
                        if len(carry) != self.Max:
                            temparray = array[:]
                            for j in range(self.Max):
                                temparray[j][i] = 'X'
                            index = i
                            mcarry += self.Permute(temparray, index, carry, mcarry)
                        else:
                            return mcarry
                print('pass', mcarry)
                return mcarry

            def Main(self):
                out = []
                self.Evaluate()
                for i in range(self.Max):
                    # array appears to maintain the same reference after each copy,
                    # resulting in an incorrect array being passed to Permute after
                    # the first iteration.
                    array = self.Array[:]
                    print(self.Array[:])
                    for j in range(self.Max):
                        array[j][i] = 'X'
                    print('I:', i, array)
                    out.append(self.Permute(array, i, [str(self.CartisianCoordinates[i][0]), 0], []))
                return out

        SalesPerson = TSP()
        print(SalesPerson.Main())

    It would be greatly appreciated if you could provide me with help in solving the reference problems I am having. Thank you.
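    The symptom described - copies that still track each other - is characteristic of shallow-copying a nested list, since [:] duplicates only the outer list while the inner row lists stay shared. A minimal demonstration:

        import copy

        original = [['X', 1], ['X', 2]]
        shallow = original[:]           # new outer list, same inner row lists
        shallow[0][0] = 'changed'
        print(original[0][0])           # 'changed' -- the row was shared

        deep = copy.deepcopy(original)  # duplicates the inner lists as well
        deep[0][0] = 'again'
        print(original[0][0])           # still 'changed'; the deep copy is independent

    Replacing array = self.Array[:] (and temparray = array[:]) with copy.deepcopy(...) would be the direct application, assuming the rest of the algorithm is as intended.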


  • Google Analytics API Authentication Speedup

    - by Paulo
    I'm using a Google Analytics API class in PHP made by Doug Tan to retrieve Analytics data from a specific profile. Check the URL here: http://code.google.com/intl/nl/apis/analytics/docs/gdata/gdataArticlesCode.html

    When you create a new instance of the class you can add the profile id, your Google account + password, a date range, and whatever dimensions and metrics you want to pick up from Analytics. For example, I want to see how many people visited my website from different countries in 2009.

        // make a new instance of the class
        $ga = new GoogleAnalytics($email, $password);

        // example website profile id
        $ga->setProfile('ga:4329539');

        // date range
        $ga->setDateRange('2010-02-01', '2010-03-08');

        // array to receive data from metrics and dimensions
        $array = $ga->getReport(
            array('dimensions' => ('ga:country'),
                  'metrics'    => ('ga:visits'),
                  'sort'       => '-ga:visits'
            )
        );

    Now that you know how this API class works, I'd like to address my problem: speed. It takes a lot of time to retrieve multiple types of data from the Analytics database, especially if you're building different arrays with different metrics/dimensions. How can I speed up this process? Is it possible to store all the possible data in a cache so I am able to retrieve the data without loading it over and over again?
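    Caching is certainly possible; a hedged sketch of a simple file cache wrapped around the report call (the helper name, cache path, and lifetime are arbitrary choices for illustration):

        // Sketch: serve a cached copy when it is younger than $ttl seconds.
        function getCachedReport($ga, $params, $ttl = 3600) {
            $file = sys_get_temp_dir() . '/ga_cache_' . md5(serialize($params));
            if (file_exists($file) && time() - filemtime($file) < $ttl) {
                return unserialize(file_get_contents($file));
            }
            $report = $ga->getReport($params);
            file_put_contents($file, serialize($report));
            return $report;
        }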


  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    OK, so I've just been on a SQL Server course, and we discussed the usage scenarios of multiple filegroups and files when in use over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows.

    I currently have a 250 GB database running on SQL Server 2005, where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard edition doesn't support partitioning.

    So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
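    For reference, a hedged sketch of what adding a multi-file filegroup looks like (database name, paths, and sizes are placeholders to experiment with):

        -- Sketch: a dedicated filegroup with one file per volume/core to test with.
        ALTER DATABASE MyDb ADD FILEGROUP HotTables;
        ALTER DATABASE MyDb ADD FILE
            (NAME = HotTables1, FILENAME = 'E:\SQLData\MyDb_hot1.ndf', SIZE = 4GB),
            (NAME = HotTables2, FILENAME = 'F:\SQLData\MyDb_hot2.ndf', SIZE = 4GB)
        TO FILEGROUP HotTables;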


  • cancel stream request from WCF server to client

    - by ArsenMkrt
    Hi, I posted about stream requests here: http://stackoverflow.com/questions/853448/wcf-chunk-data-with-stream

    I solved that task, but now when I close the request on the client side, the server continues to send data. Is it possible to cancel a stream request from the WCF server to the client?


  • SQL Query with ORDER BY Part 2

    - by Brett
    Hi SQL'ers, this is a follow-up question to: SQL Query with ORDER BY. But I think the SQL logic is going to be quite different, so I am posting it as a separate question. I am trying to extend my SQL SELECT query and having some trouble. I have the table:

        id  type   radius
        -------------------
        1   type1  0.25
        2   type2  0.59
        3   type1  0.26
        4   type1  0.78
        5   type3  0.12
        6   type2  0.45
        7   type3  0.22
        8   type3  0.98

    and I am trying to learn how to SELECT the second smallest radius for each given type. So the returned recordset should look like:

        id  type   radius
        -------------------
        3   type1  0.26
        2   type2  0.59
        7   type3  0.22

    (Note: in the referenced question, I was looking for the lowest radius, not the second lowest radius.) I am assuming I have to use LIMIT and OFFSET, but if I use MIN(), won't that return a distinct record containing the minimum radius? Does anyone have any thoughts on how to attack this? Many thanks, Brett
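    One hedged way to attack it without LIMIT/OFFSET tricks, assuming the table is named circles and radii are distinct within a type: keep each row that has exactly one smaller radius of the same type.

        SELECT t.id, t.type, t.radius
        FROM   circles t
        WHERE  1 = (SELECT COUNT(*)
                    FROM   circles s
                    WHERE  s.type = t.type
                      AND  s.radius < t.radius)
        ORDER  BY t.type;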


  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on allocation). I think this makes sense if we're talking about a server with many client connections where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them.

    But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues:

    The ThreadPool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority.

    I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different ThreadPool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the ThreadPool threads, because there's no guarantee that they represent N unique market data messages, because TCP is stream based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N ThreadPool threads is buying me.

    Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?


  • Cat5 vs Cat5e vs Cat6 cable confusion

    - by David Hayes
    I'm just about to move house, so I'm going to have to disconnect and re-wire my network. Pretty much all the devices I have support gigabit Ethernet, so should I go out and buy some decent network cables (and if so, what type), or should I continue using the mix of Cat5 cables I've acquired over the years? Does the type of cable really make a difference to my LAN performance?


  • Relational vs. Dimensional Databases, what's the difference?

    - by grautur
    I'm trying to learn about OLAP and data warehousing, and I'm confused about the difference between relational and dimensional modeling. Is dimensional modeling basically relational modeling, but allowing for redundant/un-normalized data?

    For example, let's say I have historical sales data on (product, city, # sales). I understand that the following would be a relational point-of-view:

        Product  | City          | # Sales
        Apples   | San Francisco | 400
        Apples   | Boston        | 700
        Apples   | Seattle       | 600
        Oranges  | San Francisco | 550
        Oranges  | Boston        | 500
        Oranges  | Seattle       | 600

    While the following is a more dimensional point-of-view:

        Product  | San Francisco | Boston | Seattle
        Apples   | 400           | 700    | 600
        Oranges  | 550           | 500    | 600

    But it seems like both points of view would nonetheless be implemented in an identical star schema:

        Fact table:        Product ID, Region ID, # Sales
        Product dimension: Product ID, Product Name
        City dimension:    City ID, City Name

    And it's not until you start adding some additional details to each dimension that the differences start popping up. For instance, if you wanted to track regions as well, a relational database would tend to have a separate region table, in order to keep everything normalized:

        City dimension:   City ID, City Name, Region ID
        Region dimension: Region ID, Region Name, Region Manager, # Regional Stores

    While a dimensional database would allow for denormalization to keep the region data inside the city dimension, in order to make it easier to slice the data:

        City dimension: City ID, City Name, Region Name, Region Manager, # Regional Stores

    Is this correct?
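    For concreteness, a hedged sketch of the star schema described above as DDL (table and column names are assumed):

        -- Fact table plus the two dimensions from the example.
        CREATE TABLE dim_product (product_id INT PRIMARY KEY, product_name VARCHAR(50));
        CREATE TABLE dim_city    (city_id    INT PRIMARY KEY, city_name    VARCHAR(50));
        CREATE TABLE fact_sales (
            product_id INT REFERENCES dim_product(product_id),
            city_id    INT REFERENCES dim_city(city_id),
            sales      INT
        );

    The denormalized variant would simply add region_name, region_manager, etc. as columns on dim_city instead of introducing a separate dim_region table.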


  • How does Subsonic handle connections?

    - by Quintin Par
    In NHibernate you start a session by creating it during BeginRequest and close it at EndRequest:

        public class Global : System.Web.HttpApplication
        {
            public static ISessionFactory SessionFactory = CreateSessionFactory();

            protected static ISessionFactory CreateSessionFactory()
            {
                return new Configuration()
                    .Configure(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "hibernate.cfg.xml"))
                    .BuildSessionFactory();
            }

            public static ISession CurrentSession
            {
                get { return (ISession)HttpContext.Current.Items["current.session"]; }
                set { HttpContext.Current.Items["current.session"] = value; }
            }

            public Global()
            {
                BeginRequest += delegate { CurrentSession = SessionFactory.OpenSession(); };
                EndRequest += delegate
                {
                    if (CurrentSession != null)
                        CurrentSession.Dispose();
                };
            }
        }

    What's the equivalent in SubSonic? The way I understand it, NHibernate will close all the connections at EndRequest.

    Reason: while troubleshooting some legacy code in a SubSonic project I get a lot of MySQL timeouts, suggesting that the code is not closing the connections:

        MySql.Data.MySqlClient.MySqlException: error connecting: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
        Generated: Tue, 11 Aug 2009 05:26:05 GMT
        System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- MySql.Data.MySqlClient.MySqlException: error connecting: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
            at MySql.Data.MySqlClient.MySqlPool.GetConnection()
            at MySql.Data.MySqlClient.MySqlConnection.Open()
            at SubSonic.MySqlDataProvider.CreateConnection(String newConnectionString)
            at SubSonic.MySqlDataProvider.CreateConnection()
            at SubSonic.AutomaticConnectionScope..ctor(DataProvider provider)
            at SubSonic.MySqlDataProvider.GetReader(QueryCommand qry)
            at SubSonic.DataService.GetReader(QueryCommand cmd)
            at SubSonic.ReadOnlyRecord`1.LoadByParam(String columnName, Object paramValue)

    My connection string is as follows:

        <connectionStrings>
            <add name="xx" connectionString="Data Source=xx.net; Port=3306; Database=db; UID=dbuid; PWD=xx; Pooling=true; Max Pool Size=12; Min Pool Size=2; Connection Lifetime=60" />
        </connectionStrings>


  • HELP! WebClient.UploadData() throws exception while uploading files to SharePoint

    - by Royson
    In my application I am uploading files to SharePoint 2007. I am using:

        using (WebClient webClient = new WebClient())
        {
            webClient.Credentials = new NetworkCredential(userName, password);
            webClient.Headers.Add("Content-Type", "application/x-vermeer-urlencoded");
            webClient.Headers.Add("X-Vermeer-Content-Type", "application/x-vermeer-urlencoded");
            String result = Encoding.UTF8.GetString(
                webClient.UploadData(webUrl + "/_vti_bin/_vti_aut/author.dll", "POST", data.ToArray()));
        }

    The code runs successfully, but for some files it throws this exception:

        The underlying connection was closed: The connection was closed unexpectedly.
            at System.Net.WebClient.UploadDataInternal(Uri address, String method, Byte[] data, WebRequest& request)
            at System.Net.WebClient.UploadData(Uri address, String method, Byte[] data)
            at System.Net.WebClient.UploadData(String address, String method, Byte[] data)

    Any ideas what I have done wrong? I am using VS 2008, .NET 2.0.
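    One hedged thing to try for "connection was closed unexpectedly" errors: derive from WebClient so the underlying HttpWebRequest is issued with keep-alive disabled. Whether this fixes it depends on the server, so treat it as a diagnostic sketch rather than a confirmed fix.

        // Sketch: a WebClient whose requests don't reuse pooled connections.
        class NoKeepAliveWebClient : WebClient
        {
            protected override WebRequest GetWebRequest(Uri address)
            {
                WebRequest request = base.GetWebRequest(address);
                HttpWebRequest http = request as HttpWebRequest;
                if (http != null)
                    http.KeepAlive = false;   // force a fresh connection per request
                return request;
            }
        }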


  • jQuery, ASCX and WebMethods do not seem to be working

    - by Karthik K
    Hi all, I have three cascading dropdowns: Type, Categories and Sub Categories. Type loads first; upon selection of a Type, Categories load, and upon selection of a Category, Sub Categories load. I also have two buttons, "Add Category" and "Add Sub Category". Upon clicking these buttons, I open a jQuery modal form to add them, and I use a WebMethod in the code-behind to add them to the database. This works perfectly in an ASPX page.

    Since I need to use this in 3-4 pages, I thought of making the above into a user control (ASCX). When I try to use this in a web page, the WebMethods in the ASCX don't get called. Is my approach correct? What should be done for my scenario? Looking forward to your suggestions. Thanks in advance, Karthik
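    For context, ASP.NET page methods are resolved against the page itself, not against user controls, so one common arrangement is a static forwarding method on each hosting page. A hedged sketch (CategoryLogic is a hypothetical shared helper, not part of the poster's code):

        // Sketch: declared in the .aspx code-behind, not the .ascx.
        [System.Web.Services.WebMethod]
        public static string AddCategory(string name)
        {
            return CategoryLogic.AddCategory(name);   // hypothetical shared logic
        }

    An alternative with the same effect is moving the methods into a standalone .asmx web service that every page and the control can call.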


  • Memory mapped files and "soft" page faults. Unavoidable?

    - by Robert Oschler
    I have two applications (processes) running under Windows XP that share data via a memory-mapped file. Despite all my efforts to eliminate per-iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and MapViewOfFile() and it still happens. I'm beginning to wonder if it's just the way memory-mapped files work.

    If anyone there knows the O/S implementation details behind memory-mapped files, I would appreciate comments on the following theory: if two processes share a memory-mapped file and one process writes to it while another reads it, then the O/S marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the O/S knows to reload the invalidated page. Also, the number of soft page faults is therefore directly proportional to the size of the data write.

    My experiments seem to bear out the above theory. When I share data, I write one contiguous block of data; in other words, the entire shared memory area is overwritten each time. If I make the block bigger, the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory-mapped files, because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose to use a memory-mapped file instead of a TCP socket connection because I thought it would be more efficient.

    Note, if the soft page faults are harmless, please note that. I've heard that at some point, if the number is excessive, the system's performance can be marred. If soft page faults intrinsically are not significantly harmful, then if anyone has any guidelines as to what number per second is "excessive", I'd like to hear that. Thanks.

