Search Results

Search found 19055 results on 763 pages for 'high performance'.


  • What hash algorithms are parallelizable? Optimizing the hashing of large files utilizing multi-core CPUs

    - by DanO
    I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for free (as far as wall clock time). As I understand most hash algorithms, each new bit changes the entire result, and it is inherently challenging/impossible to do in parallel. Are any of the mainstream hash algorithms parallelizable? Are there any non-mainstream hashes that are parallelizable (and that have at least a sample implementation available)? As future CPUs will trend toward more cores and a leveling off in clock speed, is there any way to improve the performance of file hashing? (other than liquid nitrogen cooled overclocking?) or is it inherently non-parallelizable?
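    One commonly cited workaround is chunked (tree-style) hashing: hash fixed-size pieces of the file on separate cores, then hash the concatenation of the piece digests. The C# sketch below is only an illustration of that idea, with an arbitrary chunk size; note the result is no longer a plain SHA-256 of the whole file, so both ends of any comparison must agree on the scheme.

        using System;
        using System.IO;
        using System.Linq;
        using System.Security.Cryptography;
        using System.Threading.Tasks;

        static class ChunkedHasher
        {
            const int ChunkSize = 8 * 1024 * 1024;   // 8 MB per chunk: an arbitrary choice

            // Hash each chunk independently on the thread pool, then hash the concatenated chunk digests.
            public static byte[] HashFile(string path)
            {
                long length = new FileInfo(path).Length;
                int chunkCount = (int)((length + ChunkSize - 1) / ChunkSize);
                var digests = new byte[chunkCount][];

                Parallel.For(0, chunkCount, i =>
                {
                    using (var stream = File.OpenRead(path))   // one handle per worker
                    using (var sha = SHA256.Create())
                    {
                        stream.Position = (long)i * ChunkSize;
                        var buffer = new byte[Math.Min(ChunkSize, length - (long)i * ChunkSize)];
                        int read = 0;
                        while (read < buffer.Length)
                            read += stream.Read(buffer, read, buffer.Length - read);
                        digests[i] = sha.ComputeHash(buffer);
                    }
                });

                using (var sha = SHA256.Create())
                    return sha.ComputeHash(digests.SelectMany(d => d).ToArray());
            }
        }

    Whether this beats a single-core hash depends on how much of the SSD's remaining bandwidth the extra readers can actually use.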

    Read the article

  • Slow query. Wrong database structure?

    - by Tin
    I have a database with a table that contains tasks. Tasks have a lifecycle, and the status of a task's lifecycle can change. These state transitions are stored in a separate table, tasktransitions. I wrote a query to find all open/reopened tasks and recently changed tasks, but even with a rather small number of tasks (<1000) the execution time has become very long (0.5 s).

    Tasks

        +-------------+---------+------+-----+---------+----------------+
        | Field       | Type    | Null | Key | Default | Extra          |
        +-------------+---------+------+-----+---------+----------------+
        | taskid      | int(11) | NO   | PRI | NULL    | auto_increment |
        | description | text    | NO   |     | NULL    |                |
        +-------------+---------+------+-----+---------+----------------+

    Tasktransitions

        +------------------+-----------+------+-----+-------------------+----------------+
        | Field            | Type      | Null | Key | Default           | Extra          |
        +------------------+-----------+------+-----+-------------------+----------------+
        | tasktransitionid | int(11)   | NO   | PRI | NULL              | auto_increment |
        | taskid           | int(11)   | NO   | MUL | NULL              |                |
        | status           | int(11)   | NO   | MUL | NULL              |                |
        | description      | text      | NO   |     | NULL              |                |
        | userid           | int(11)   | NO   |     | NULL              |                |
        | transitiondate   | timestamp | NO   |     | CURRENT_TIMESTAMP |                |
        +------------------+-----------+------+-----+-------------------+----------------+

    Query

        SELECT tasks.taskid, tasks.description, tasklaststatus.status
        FROM tasks
        LEFT OUTER JOIN (
            SELECT tasktransitions.taskid, tasktransitions.transitiondate, tasktransitions.status
            FROM tasktransitions
            INNER JOIN (
                SELECT taskid, MAX(transitiondate) AS lasttransitiondate
                FROM tasktransitions
                GROUP BY taskid
            ) AS tasklasttransition
                ON tasklasttransition.lasttransitiondate = tasktransitions.transitiondate
                AND tasklasttransition.taskid = tasktransitions.taskid
        ) AS tasklaststatus ON tasklaststatus.taskid = tasks.taskid
        WHERE tasklaststatus.status IS NULL
           OR tasklaststatus.status = 0
           OR tasklaststatus.transitiondate > '2013-09-01';

    I'm wondering if the database structure is the best choice performance-wise. Could adding indexes help? I already tried to add some, but I don't see great improvements.

        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
        | Table           | Non_unique | Key_name       | Seq_in_index | Column_name      | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
        | tasktransitions | 0          | PRIMARY        | 1            | tasktransitionid | A         | 896         | NULL     | NULL   |      | BTREE      |         |               |
        | tasktransitions | 1          | taskid_date_ix | 1            | taskid           | A         | 896         | NULL     | NULL   |      | BTREE      |         |               |
        | tasktransitions | 1          | taskid_date_ix | 2            | transitiondate   | A         | 896         | NULL     | NULL   |      | BTREE      |         |               |
        | tasktransitions | 1          | status_ix      | 1            | status           | A         | 3           | NULL     | NULL   |      | BTREE      |         |               |
        +-----------------+------------+----------------+--------------+------------------+-----------+-------------+----------+--------+------+------------+---------+---------------+

    Any other suggestions?

    Read the article

  • Ruby integer to string key

    - by Gene
    A system I'm building needs to convert non-negative Ruby integers into shortest-possible UTF-8 string values. The only requirement on the strings is that their lexicographic order be identical to the natural order on integers. What's the best Ruby way to do this? We can assume the integers are 32 bits and the sign bit is 0. This is successful: (i >> 24).chr + ((i >> 16) & 0xff).chr + ((i >> 8) & 0xff).chr + (i & 0xff).chr But it appears to be 1) garbage-intense and 2) ugly. I've also looked at pack solutions, but these don't seem portable due to byte order. FWIW, the application is Redis hash field names. Building keys may be a performance bottleneck, but probably not. This question is mostly about the "Ruby way".

    Read the article

  • File size monitoring in C#

    - by manemawanna
    Hello, I work in the systems & admin team and have been given the task of creating a quota management application to try to encourage users to better manage their resources, as we currently have issues with disk space and don't enforce hard quotas. At the moment I'm using the code below to go through all the files in a user's homespace to retrieve the overall amount of space they are using. From what I've seen elsewhere there's no other way to do this in C#; the issue with it is that there's quite a high overhead while it retrieves the size of each file and then builds a total.

        try
        {
            long dirSize = 0;
            FileInfo[] FI = new DirectoryInfo("I:\\").GetFiles("*.*", SearchOption.AllDirectories);
            foreach (FileInfo F1 in FI)
            {
                dirSize += F1.Length;
            }
            return dirSize;
        }
        // (catch/finally omitted in the original snippet)

    So I'm looking for a quicker way to do this, or a quick way to monitor changes in the size of files using the options available through FileSystemWatcher. At the moment the only thing I can think of is creating a hashtable containing the location and size of each file, so that when a size-changed event occurs I can compare the old size against the new one and update the total. Any suggestions would be greatly appreciated.
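    A hedged sketch of that cache-plus-watcher idea: scan once into a per-file size dictionary, keep a running total, and let a FileSystemWatcher adjust the total on create/change/delete events. The root path and filter are placeholders, renames and files locked mid-write would still need handling, and EnumerateFiles needs .NET 4 (GetFiles works too, it just builds the whole array first).

        using System;
        using System.Collections.Generic;
        using System.IO;

        class QuotaTracker
        {
            private readonly Dictionary<string, long> _sizes =
                new Dictionary<string, long>(StringComparer.OrdinalIgnoreCase);
            private readonly object _lock = new object();
            public long Total { get; private set; }

            public QuotaTracker(string root)
            {
                // One full scan up front; after this only the watcher touches the totals.
                foreach (var file in new DirectoryInfo(root).EnumerateFiles("*", SearchOption.AllDirectories))
                {
                    _sizes[file.FullName] = file.Length;
                    Total += file.Length;
                }

                var watcher = new FileSystemWatcher(root)
                {
                    IncludeSubdirectories = true,
                    NotifyFilter = NotifyFilters.FileName | NotifyFilters.Size
                };
                watcher.Created += (s, e) => Update(e.FullPath);
                watcher.Changed += (s, e) => Update(e.FullPath);
                watcher.Deleted += (s, e) => Remove(e.FullPath);
                watcher.EnableRaisingEvents = true;
            }

            private void Update(string path)
            {
                lock (_lock)
                {
                    long oldSize;
                    _sizes.TryGetValue(path, out oldSize);
                    long newSize;
                    try { newSize = new FileInfo(path).Length; }
                    catch (IOException) { return; }            // file locked or already gone
                    _sizes[path] = newSize;
                    Total += newSize - oldSize;
                }
            }

            private void Remove(string path)
            {
                lock (_lock)
                {
                    long oldSize;
                    if (_sizes.TryGetValue(path, out oldSize))
                    {
                        _sizes.Remove(path);
                        Total -= oldSize;
                    }
                }
            }
        }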

    Read the article

  • Better language or checking tool?

    - by rwallace
    This is primarily aimed at programmers who use unmanaged languages like C and C++ in preference to managed languages, forgoing some forms of error checking to obtain benefits like the ability to work in extremely resource constrained systems or the last increment of performance, though I would also be interested in answers from those who use managed languages. Which of the following would be of most value? A language that would optionally compile to CLR byte code or to machine code via C, and would provide things like optional array bounds checking, more support for memory management in environments where you can't use garbage collection, and faster compile times than typical C++ projects. (Think e.g. Ada or Eiffel with Python syntax.) A tool that would take existing C code and perform static analysis to look for things like potential null pointer dereferences and array overflows. (Think e.g. an open source equivalent to Coverity.) Something else I haven't thought of. Or put another way, when you're using C family languages, is the top of your wish list more expressiveness, better error checking or something else? The reason I'm asking is that I have a design and prototype parser for #1, and an outline design for #2, and I'm wondering which would be the better use of resources to work on after my current project is up and running; but I think the answers may be useful for other tools programmers also. (As usual with questions of this nature, if the answer you would give is already there, please upvote it.)

    Read the article

  • Using Lucene to index private data, should I have a separate index for each user or a single index for all users?

    - by Nathan Bayles
    I am developing an Azure-based website and I want to provide search capabilities using Lucene (structured JSON objects would be indexed and stored in Lucene, and other content, such as Word documents, etc., would be indexed in Lucene but stored in blob storage). I want the search to be secure, such that one user would never see a document belonging to another user. I want to allow ad-hoc searches as typed by the user. Lastly, I want to query programmatically to return predefined sets of data, such as "all notes for user X". I think I understand how to add properties to each document to achieve these 3 objectives (I am listing them here so that anyone kind enough to answer will have a better idea of what I am trying to do). My questions revolve around performance and security. Can I improve document security by having a separate index for each user, or is including the user's ID as a parameter in each search sufficient? Can I improve indexing speed and total throughput of the system by having a separate index for each user? My thinking is that having separate indexes would allow me to scale the system by having multiple index writers (perhaps even on different server instances) working at the same time, each on their own index. Any insight would be greatly appreciated. Regards, Nate
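    On the security question, a common pattern is to stamp every document with its owner's ID as a non-analyzed field and have the server add that clause to every query it builds, so an ad-hoc query string can never escape the filter. A rough sketch against a Lucene.NET 3.x-style API follows; treat the exact class and enum names as assumptions to check against your Lucene.NET version.

        using Lucene.Net.Documents;
        using Lucene.Net.Index;
        using Lucene.Net.Search;

        static class PrivateIndexHelper
        {
            // Index time: stamp every document with the owning user's ID.
            public static Document Stamp(Document doc, string ownerId)
            {
                doc.Add(new Field("ownerId", ownerId, Field.Store.YES, Field.Index.NOT_ANALYZED));
                return doc;
            }

            // Query time: wrap whatever the user typed in a query that also requires the owner clause.
            public static Query RestrictToOwner(Query userQuery, string ownerId)
            {
                var combined = new BooleanQuery();
                combined.Add(userQuery, Occur.MUST);
                combined.Add(new TermQuery(new Term("ownerId", ownerId)), Occur.MUST);
                return combined;
            }
        }

    Separate per-user indexes mainly buy parallel index writers and smaller merges; the field-based filter alone is generally treated as sufficient for isolation as long as the server, not the client, supplies ownerId.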

    Read the article

  • SQL Server 2008 R2

    - by kevchadders
    Hi all, I heard on the grapevine that Microsoft will be releasing SQL Server 2008 R2 within a year. Though I initially thought this was a patch for the just-released 2008 version, I realised that it's actually a completely different version that you would have to pay for. (Am I correct that if you had SQL Server 2008, you would have to pay again if you wanted to upgrade to 2008 R2?) If you're already running SQL Server 2008, would you say it's still worth the upgrade? Or does it depend on the size of your company and current setup? From what I've read initially, I get the impression that this version would be more useful for very high-end hardware setups where you want very good scalability. With regard to programming, are there any extra enhancements/support in there which you're aware of that will significantly help .NET product/web development? I initially found a couple of links on it, but I was wondering if anyone had any more info to share on the subject, as I couldn't find anything on SO about it. Thanks.

    New SQL Server R2 Microsoft Link on it. Microsoft SQL 2008 R2

    EDIT: More information based on the Express edition. One very interesting thing about SQL Server 2008 R2 concerns the Express edition. Previous versions of SQL Server Express had a database size limit of 4 GB. With SQL Server Express 2008 R2, this has now been increased to 10 GB! This now makes the free Express edition a much more viable option for small and medium-sized applications that are relatively light on database requirements. Bear in mind that this limit is per database, so if you coded your application cleverly enough to use a separate database for historical/archived data, you could squeeze even more out of it! For more information, see here: http://blogs.msdn.com/sqlexpress/archive/2010/04/21/database-size-limit-increased-to-10gb-in-sql-server-2008-r2-express.aspx

    Read the article

  • Was Visual Studio 2008 or 2010 written to use multiple cores?

    - by Erx_VB.NExT.Coder
    Basically, I want to know if the Visual Studio IDE and/or compiler in 2010 was written to make use of a multi-core environment (I understand we can target multi-core environments in '08 and '10, but that is not my question). I am trying to decide whether I should get a higher-clock dual core or a lower-clock quad core, as I want to figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (IDE and background compiler). If the most important section (the background compiler and other IDE tasks) runs on one core, then that core will max out sooner on a quad core, especially if the background compiler is the heaviest task. I would imagine this would be difficult to separate into more than one process, so even if VS uses multiple cores, you might still be better off going for a higher-clock CPU if the majority of the processing is still bound to occur in one core (i.e. the most significant part of the VS environment). I am a VB programmer; they've made great performance improvements in Beta 2, congrats, but I would love to be able to use VS seamlessly... anyone have any ideas? Thanks, erx

    Read the article

  • Is it acceptable to wrap PHP library functions solely to change the names?

    - by Carson Myers
    I'm going to be starting a fairly large PHP application this summer, on which I'll be the sole developer (so I don't have any coding conventions to conform to aside from my own). PHP 5.3 is a decent language IMO, despite the stupid namespace token. But one thing that has always bothered me about it is the standard library and its lack of a naming convention. So I'm curious: would it be seriously bad practice to wrap some of the most common standard library functions in my own functions/classes to make the names a little better? I suppose it could also add or modify some functionality in some cases, although at the moment I don't have any examples (I figure I will find ways to make them OO or make them work a little differently while I am working). If you saw a PHP developer do this, would you think "Man, this is one shoddy developer"? Additionally, I don't know much (or anything) about whether/how PHP is optimized, and I know that usually PHP performance doesn't matter. But would doing something like this have a noticeable impact on the performance of my application?

    Read the article

  • How to accommodate for the different screen resolution of iPhone 4?

    - by mystify
    This is a programming question! Read on before you vote to close! According to Apple, iPhone 4 has a new screen resolution:

        3.5-inch (diagonal) widescreen Multi-Touch display
        960-by-640-pixel resolution at 326 ppi

    This little detail affects our apps in a heavy way. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most, if not all, developers do is design everything in such a way that a touchable area is, for example, 50 x 50 pixels big. Just enough to tap it. Things have been positioned relative to the upper left, to reach a specific position on screen, let's say the center, or somewhere at the bottom.

    Edit: It seems Apple has integrated a switch that allows you to tell if an app is high-res or not. Nice. When we develop high-resolution apps, they probably won't work on older devices. And if they did, they would suffer a lot from images 4 times the size, having to scale them down in memory.

    This is community wiki. Just add anything that you think is relevant to this huge problem (constant screen res was one of the main reasons why I didn't go for Android!!).

    Read the article

  • C++ Array vs vector

    - by blue_river
    When using a C++ vector, the time spent is 718 milliseconds, while when I use an array, the time is almost 0 milliseconds. Why is there so much performance difference?

        int _tmain(int argc, _TCHAR* argv[])
        {
            const int size = 10000;
            clock_t start, end;

            start = clock();
            vector<int> v(size * size);
            for (int i = 0; i < size; i++)
            {
                for (int j = 0; j < size; j++)
                {
                    v[i * size + j] = 1;
                }
            }
            end = clock();
            cout << (end - start) << " milliseconds." << endl;   // 718 milliseconds

            int f = 0;
            start = clock();
            int arr[size * size];
            for (int i = 0; i < size; i++)
            {
                for (int j = 0; j < size; j++)
                {
                    arr[i * size + j] = 1;
                }
            }
            end = clock();
            cout << (end - start) << " milliseconds." << endl;   // 0 milliseconds

            return 0;
        }

    Read the article

  • iPhone OpenGLES: Textures are consuming too much memory and the program crashes with signal "0"

    - by CustomAppsMan
    I am not sure what the problem is. My app is running fine on the simulator, but when I try to run it on the iPhone it crashes, during debugging or without debugging, with signal "0". I am using the Texture2D.m and OpenGLES2DView.m from the examples provided by Apple. I profiled the app on the iPhone with Instruments using the Memory tracer from the Library, and when the app died the final memory consumed was about 60 MB real and 90+ MB virtual. Is there some other problem, or is the iPhone just killing the application because it has consumed too much memory? If you need any information please state it and I will try to provide it. I am creating thousands of textures at load time, which is why the memory consumption is so high. I really can't do anything about reducing the number of pics being loaded. I was running before on just UIImage, but it was giving me really low frame rates; I read on this site that I should use OpenGL ES for higher frame rates. Also, as a sub-question: is there any way not to use UIImage to load the PNG file and then use the provided Texture class to create the texture for OpenGL ES functions to use for drawing? Is there some function in OpenGL ES which will create a texture straight from a PNG file?

    Read the article

  • Data structure choices for high-speed and memory-efficient detection of duplicate strings

    - by Jonathan Holland
    I have an interesting problem that could be solved in a number of ways: I have a function that takes in a string. If this function has never seen this string before, it needs to perform some processing. If the function has seen the string before, it needs to skip processing. After a specified amount of time, the function should accept duplicate strings again. This function may be called thousands of times per second, and the string data may be very large. This is a highly abstracted explanation of the real application, just trying to get down to the core concept for the purpose of the question. The function will need to store state in order to detect duplicates. It also will need to store an associated timestamp in order to expire duplicates. It does NOT need to store the strings; a unique hash of the string would be fine, provided there are no false positives due to collisions (use a perfect hash?) and the hash function is performant enough. The naive implementation would be simply (in C#) a Dictionary<String,DateTime>, though in the interest of lowering the memory footprint and potentially increasing performance I'm evaluating a custom data structure to handle this instead of a basic hashtable. So, given these constraints, what would you use? EDIT, some additional information that might change proposed implementations: 99% of the strings will not be duplicates. Almost all of the duplicates will arrive back to back, or nearly sequentially. In the real world, the function will be called from multiple worker threads, so state management will need to be synchronized.
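    A minimal sketch of the naive approach, adjusted for the constraints above: key on a SHA-256 digest so the large strings themselves are never retained, and use ConcurrentDictionary so the worker threads need no external lock. The collision risk of a 256-bit digest is negligible for this purpose but not literally zero, and expired keys are never swept here, so a periodic cleanup pass would be needed to bound memory.

        using System;
        using System.Collections.Concurrent;
        using System.Security.Cryptography;
        using System.Text;

        class DuplicateGate
        {
            private readonly ConcurrentDictionary<string, DateTime> _seen =
                new ConcurrentDictionary<string, DateTime>();
            private readonly TimeSpan _window;

            public DuplicateGate(TimeSpan window) { _window = window; }

            // Returns true when the caller should process the string (first sighting, or the last one has expired).
            public bool ShouldProcess(string input)
            {
                string key;
                using (var sha = SHA256.Create())                     // key on a digest, not the string itself
                    key = Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(input)));

                var now = DateTime.UtcNow;
                while (true)
                {
                    DateTime last;
                    if (!_seen.TryGetValue(key, out last))
                    {
                        if (_seen.TryAdd(key, now)) return true;      // first sighting: process
                        continue;                                     // lost a race with another thread; re-read
                    }
                    if (now - last < _window) return false;           // recent duplicate: skip
                    if (_seen.TryUpdate(key, now, last)) return true; // expired: claim it and process again
                }
            }
        }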

    Read the article

  • WCF with MANY database connections

    - by Jorge Dominguez
    I'm working on the development of an ERP-type .NET WinForms application consuming a WCF service. It's to be used by many small companies (in the range of 100-200). The database is SQL Server 2008 and the service will be hosted as a Windows service. Even though there will be a single DB server, our customer insists on having separate databases for each company. That is because of stability/support concerns (like a DB being damaged or taken offline for some reason, thus affecting all clients), concerns coming from previous experiences (not necessarily with the same platform). With a single database, connections to the DB would be opened at service start-up and pooling used, but I'm not sure how connections could be managed in a multiple-DB scenario: Could a connection to the corresponding DB be opened and closed for each service request? Would performance be acceptable? If a connection is opened and maintained for each company accessing the system, what's the practical limit of open connections (to different databases)? It would be very interesting to hear your opinions and suggestions for this situation. Thanks
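    On the first question, ADO.NET pools connections per distinct connection string, so each company's database effectively gets its own pool and an open/close per service request is normally just a cheap checkout from that pool. A hedged sketch of the pattern follows; the company-to-database mapping, the connection string, and the Tasks query are made up for illustration.

        using System.Data.SqlClient;

        static class CompanyDb
        {
            // Placeholder: the company-to-database mapping would really come from configuration.
            private static string ConnectionStringFor(string companyId)
            {
                return string.Format("Server=dbserver;Database=Erp_{0};Integrated Security=true;", companyId);
            }

            // Open per call, close immediately; ADO.NET keeps one pool per distinct connection string,
            // so this is usually a pool checkout rather than a fresh physical connection each time.
            public static int CountOpenTasks(string companyId)
            {
                using (var conn = new SqlConnection(ConnectionStringFor(companyId)))
                using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Tasks", conn))
                {
                    conn.Open();
                    return (int)cmd.ExecuteScalar();
                }
            }
        }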

    Read the article

  • What's the most trivial function that would benefit from being computed on a GPU?

    - by hanDerPeder
    Hi. I'm just starting out learning OpenCL. I'm trying to get a feel for what performance gains to expect when moving functions/algorithms to the GPU. The most basic kernel given in most tutorials is a kernel that takes two arrays of numbers, sums the values at the corresponding indexes, and writes them to a third array, like so:

        __kernel void add(__global float *a, __global float *b, __global float *answer)
        {
            int gid = get_global_id(0);
            answer[gid] = a[gid] + b[gid];
        }

        __kernel void sub(__global float *n, __global float *answer)
        {
            int gid = get_global_id(0);
            answer[gid] = n[gid] - 2;
        }

        __kernel void ranksort(__global const float *a, __global float *answer)
        {
            int gid = get_global_id(0);
            int gSize = get_global_size(0);
            int x = 0;
            for (int i = 0; i < gSize; i++)
            {
                if (a[gid] > a[i]) x++;
            }
            answer[x] = a[gid];
        }

    I am assuming that you could never justify computing this on the GPU; the memory transfer would outweigh the time it would take to compute this on the CPU by orders of magnitude (I might be wrong about this, hence this question). What I am wondering is: what would be the most trivial example where you would expect a significant speedup when using an OpenCL kernel instead of the CPU?

    Read the article

  • What collection object is appropriate for fixed ordering of values?

    - by makerofthings7
    Scenario: I am tracking several performance counters and have a CounterDescription[] that correlates to a DataSnapshot[], where CounterDescription[n] describes the data loaded within DataSnapshot[n]. I want to expose an easy-to-use API within C# that will allow for the easy and efficient expansion of the arrays. For example:

        CounterDescription[0] = Humidity; DataSnapshot[0] = .9;
        CounterDescription[1] = Temp;     DataSnapshot[1] = 63;

    My upload object is defined like this. Note how my intent is to correlate many DataSnapshots with a datetime reference, using the offset of the data to refer to its meaning; this was determined to be the most efficient way to store the data on the back-end, and has now reflected itself into the following structure:

        public class myDataObject
        {
            [DataMember]
            public SortedDictionary<DateTime, float[]> Pages { get; set; }

            /// <summary>
            /// An array that identifies what each position in the array is supposed to be
            /// </summary>
            [DataMember]
            public CounterDescription[] Counters { get; set; }
        }

    I will need to expand each of these arrays (float[] and CounterDescription[]), but whatever data already exists must stay at that relative offset. Which .NET objects support this? I think Array[], LinkedList<T>, and List<T> are able to keep the data fixed in the right locations. What do you think?
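    One hedged option, given that offsets only ever grow and existing positions must stay put: keep the descriptions and the values in two List<T>s and only ever Add to the end, since List<T> preserves insertion order and never reshuffles existing indexes. The sketch below uses simplified stand-in types rather than the real CounterDescription/DataSnapshot members.

        using System.Collections.Generic;

        class CounterSet
        {
            private readonly List<string> _descriptions = new List<string>(); // stand-in for CounterDescription
            private readonly List<float> _snapshot = new List<float>();       // stand-in for DataSnapshot

            // Appending keeps every existing offset stable, so _descriptions[n] always describes _snapshot[n].
            public int Register(string description, float initialValue)
            {
                _descriptions.Add(description);
                _snapshot.Add(initialValue);
                return _descriptions.Count - 1;   // the counter's permanent offset
            }

            public void Update(int offset, float value) { _snapshot[offset] = value; }

            public float[] SnapshotArray() { return _snapshot.ToArray(); }    // e.g. for the float[] DataMember payload
        }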

    Read the article

  • How do I become better at math after being a programmer for several years?

    - by loxs
    I've had quite a weird career till now. First I graduated from a medical school. Then I went into marketing (pharmaceuticals). And then, umm, after some time, I decided to go for my (till then) hobby and became a "professional" programmer. I've been quite successful at this ever since. I have quite a few languages "under my belt". I earn decently and I have been involved in the open source community quite heavily. The thing is that I suck at math :). Well, not totally of course, as I get my work done. But I don't know how much I suck. And I don't know how to find out. Math was never really a priority during my middle/high school years. I picked up only as little as I could get away with, because I was always getting ready to go for medicine. Of course I know the basics of algebra, things like "normal" and square equations, and also the basics of geometry. But well, there are things that I have missed. And lately I have been fascinated by things like probability theory, infinity, chaos/order, etc. But every time I try to learn something about these topics, I hit a wall of terminology, special symbols, and some special kind of thinking that is quite like mine (a programmer's), but also a lot different (and appears weird to me). So, what kinds of books would you recommend? It's very hard to find something suitable. Everything I find is either too easy (and boring) or totally impenetrable.

    Read the article

  • Slow T-SQL query, convert to LINQ to Objects

    - by yimbot
    I have a T-SQL query which populates a DataSet from an MSSQL database.

        string qry = "SELECT * FROM EvnLog AS E WHERE TimeDate = (SELECT MAX(TimeDate) FROM EvnLog WHERE Code = E.Code) AND (Event = 8) AND (TimeDate BETWEEN '" + Start + "' AND '" + Finish + "')";

    The database is quite large and, being the type of nested query it is, the data adapter can take a number of minutes to fill the DataSet. I have extended the DataAdapter's timeout value to 480 seconds to combat it, but the client still complains about slow performance and occasional timeouts. To combat this, I was considering executing a simpler query (i.e. just taking the date range) and then populating a generic List which I could then execute a LINQ query against. The simple query executes very quickly, which is great. However, I cannot seem to build a LINQ query which generates the same results as the T-SQL query above. Is this the best solution to this problem? Can anyone provide tips on rewriting the above T-SQL into LINQ? I have also considered using a DataView, but cannot seem to get the results from that either.
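    A hedged LINQ to Objects translation of that T-SQL, assuming the date-range rows have already been loaded into plain objects (the EvnLogEntry shape below is made up for illustration): take the latest row per Code, then keep the ones with Event 8 inside the range.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class EvnLogEntry                 // made-up shape for illustration
        {
            public string Code { get; set; }
            public int Event { get; set; }
            public DateTime TimeDate { get; set; }
        }

        static class EvnLogQueries
        {
            public static List<EvnLogEntry> LatestEvent8InRange(
                IEnumerable<EvnLogEntry> entries, DateTime start, DateTime finish)
            {
                return entries
                    .GroupBy(e => e.Code)
                    .Select(g => g.OrderByDescending(e => e.TimeDate).First())            // latest row per Code
                    .Where(e => e.Event == 8 && e.TimeDate >= start && e.TimeDate <= finish)
                    .ToList();
                // Caveat: if two rows tie on a Code's max TimeDate, the SQL returns both while First() keeps one.
            }
        }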

    Read the article

  • Small open source and free Java applications

    - by user1089770
    I am a Perl developer who has just stepped into the Java world, and I want to become very proficient in Java EE development. However, I have found that, being a Perl guy, I have very high MTI (Mother Tongue Influence). Indians will know what MTI is. For others, it can be defined as an influence of your "main" language whenever you "speak" your new language. In other words, our mind converts/translates what we want to "say" in the new language based on the "main" language, which can be quite absurd or meaningless in the "new" language. Coming back to the point, as a result of this MTI, many lines of my code have this Perl influence and are absurd and useless for production. Hence, I'd like to see, or better, do some research on, existing small to medium open source Java applications, so that I will know how I can "speak" in the native way of the "new" language. Some examples in PHP would be WordPress, Zend, etc. So I'd really appreciate it if you guys could suggest any such apps. Apps that have great commercial value are highly preferred.

    Read the article

  • How to control the "flow" of an ASP.NET MVC (3.0) web app that relies on Facebook membership, with Facebook C# SDK?

    - by Chad
    I want to totally remove the standard ASP.NET membership system and use Facebook only for my web app's membership. Note, this is not a Facebook canvas app question. Typically, in an ASP.NET app you have some key properties & methods to control the "flow" of an app, notably Request.IsAuthenticated, [Authorize] (in MVC apps), Membership.GetUser() and Roles.IsUserInRole(), among others. It looks like [FacebookAuthorize] is equivalent to [Authorize]. Also, there's some standard work I do across all controllers in my site, so I built a BaseController that overrides OnActionExecuting(FilterContext). Typically, I populate ViewData with the user's profile within this action. Would performance suffer if I made a call to fbApp.Get("me") in this action? I use the Facebook JavaScript SDK to do registration, which is nice and easy. But that's all client-side, and I'm having a hard time wrapping my mind around when to use client-side Facebook calls versus server-side. There will be a point when I need to grab the user's Facebook UID and store it in a "profile" table along with a few other bits of data. That would probably be best handled on the return URL from the registration plugin... correct? On a side note, what data is returned from fbApp.Get("me")?
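    A hedged sketch of the OnActionExecuting pattern asked about above: fetch the profile once per session and reuse it, so the Graph round-trip does not happen on every action. The fbApp field and its Get("me") call are taken from the question rather than verified against a specific SDK version; Get("me") generally returns the Graph "me" object (id, name, and so on).

        using System.Web.Mvc;

        public abstract class BaseController : Controller
        {
            // fbApp stands for whatever Facebook SDK client the app already holds; Get("me") is quoted from the question.
            protected dynamic fbApp;

            protected override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                // Fetch the profile once per session instead of once per action to keep Graph round-trips down.
                var profile = Session["fbProfile"];
                if (profile == null && Request.IsAuthenticated)
                {
                    profile = fbApp.Get("me");          // network call to the Graph API
                    Session["fbProfile"] = profile;
                }
                ViewData["Profile"] = profile;

                base.OnActionExecuting(filterContext);
            }
        }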

    Read the article

  • C++: Copy constructor: use getters or access member vars directly?

    - by cbrulak
    I have a simple container class:

        class Container
        {
        public:
            Container() {}

            Container(const Container& cont)        // option 1
            {
                SetMyString(cont.GetMyString());
            }

            // OR

            Container(const Container& cont)        // option 2
            {
                m_str1 = cont.m_str1;
            }

            string GetMyString() const { return m_str1; }   // const so it can be called on the const reference
            void SetMyString(string str) { m_str1 = str; }

        private:
            string m_str1;
        };

    So, would you recommend this method or accessing the member variables directly? In the example, all code is inline, but in our real code there is no inline code.

    Update (29 Sept 09): Some of these answers are well written; however, they seem to be missing the point of this question: this is a simple contrived example to discuss using getters/setters vs. variables. Initializer lists or private validator functions are not really part of this question. I'm wondering if either design will make the code easier to maintain and expand. Some people are focusing on the string in this example, however it is just an example; imagine it is a different object instead. I'm not concerned about performance. We're not programming on the PDP-11.

    Read the article

  • What data structure should I use for a BigInt class?

    - by user1086004
    I would like to implement a BigInt class which will be able to handle really big numbers. I only want to add and multiply numbers; however, the class should also handle negative numbers. I wanted to represent the number as a string, but there is a big overhead with converting the string to an int and back for adding. I want to implement addition the way it is done in high school: add the corresponding digits, and if the result is 10 or more, carry into the next position. Then I thought that it would be better to handle it as an array of unsigned long long int and keep the sign separate in a bool. With this I'm afraid of the size of the int, as the C++ standard, as far as I know, guarantees only that int < float < double. Correct me if I'm wrong. So when I overflow one element I should move forward in the array and start adding into the next array position. Is there any data structure that is appropriate or better for this? Thanks in advance.
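    A hedged sketch of the usual limb layout, written in C# purely for illustration; the carry logic is identical in C++, where uint32_t/uint64_t give the fixed widths that uint/ulong give below. The magnitude is a list of base-2^32 digits, least significant first, with the sign kept as a separate bool elsewhere.

        using System.Collections.Generic;

        static class BigIntSketch
        {
            // Adds two non-negative magnitudes stored as base-2^32 limbs, least significant limb first.
            public static List<uint> AddMagnitudes(List<uint> a, List<uint> b)
            {
                var result = new List<uint>();
                ulong carry = 0;
                for (int i = 0; i < a.Count || i < b.Count; i++)
                {
                    ulong sum = carry;                     // 64-bit accumulator: the per-limb sum cannot overflow
                    if (i < a.Count) sum += a[i];
                    if (i < b.Count) sum += b[i];
                    result.Add((uint)(sum & 0xFFFFFFFF));  // low 32 bits become this limb
                    carry = sum >> 32;                     // high bits carry into the next limb
                }
                if (carry != 0) result.Add((uint)carry);
                return result;
            }
        }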

    Read the article

  • File upload fails when user is authenticated. Using IIS7 Integrated mode.

    - by Nikkelmann
    These are the user identities my website tells me that it uses:

        Logged on:     NT AUTHORITY\NETWORK SERVICE (cannot write any files at all)
        Not logged on: WSW32\IUSR_77 (can write files to any folder)

    I have an ASP.NET 4.0 website on a shared-hosting IIS7 web server running in Integrated mode with 32-bit application support enabled, and MSSQL 2008. Using Classic mode is not an option, since I need to secure some static files and I use Routing. In my web.config file I have set the following:

        <system.webServer>
            <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    My hosting company says that impersonation is enabled by default at machine level, so this is not something I can change. I asked their support and they referred me to this article: http://www.codinghub.net/2010/08/differences-between-integrated-mode-and.html citing this part:

        Different Windows identity in Forms authentication
        When Forms authentication is used by an application and anonymous access is allowed, the Integrated mode identity differs from the Classic mode identity in the following ways:
        * ServerVariables["LOGON_USER"] is filled.
        * Request.LogonUserIdentity uses the credentials of the [NT AUTHORITY\NETWORK SERVICE] account instead of the [NT AUTHORITY\INTERNET USER] account.
        This behavior occurs because authentication is performed in a single stage in Integrated mode. Conversely, in Classic mode, authentication occurs first with IIS 7.0 using anonymous access, and then with ASP.NET using Forms authentication. Thus, the result of the authentication is always a single user: the Forms authentication user. AUTH_USER/LOGON_USER returns this same user because the Forms authentication user credentials are synchronized between IIS 7.0 and ASP.NET. A side effect is that LOGON_USER, HttpRequest.LogonUserIdentity, and impersonation can no longer access the anonymous user credentials that IIS 7.0 would have authenticated by using Classic mode.

    How do I set up my website so that it can use the proper identity with the proper permissions? I've looked high and low for any answers regarding this specific problem, but have found nil so far... I hope you can help!

    Read the article

  • How would I compare two Lists(Of <CustomClass>) in VB?

    - by Kumba
    I'm working on implementing the equality operator = for a custom class of mine. The class has one property, Value, which is itself a List(Of OtherClass), where OtherClass is yet another custom class in my project. I've already implemented the IComparer, IComparable, IEqualityComparer, and IEquatable interfaces, the operators =, <>, bool and not, and overridden Equals and GetHashCode for OtherClass. This should give me all the tools I need to compare these objects, and various tests comparing two singular instances of these objects so far check out. However, I'm not sure how to approach this when they are in a List. I don't care about the list order. Given:

        Dim x As New List(Of OtherClass) From {New OtherClass("foo"), New OtherClass("bar"), New OtherClass("baz")}
        Dim y As New List(Of OtherClass) From {New OtherClass("baz"), New OtherClass("foo"), New OtherClass("bar")}

    Then (x = y).ToString should print out True. I need to compare the same (not distinct) set of objects in this list. The list shouldn't support dupes of OtherClass, but I'll have to figure out how to add that in later as an exception. I'm not interested in using LINQ. It looks nice, but in the few examples I've played with, it adds a performance overhead that bugs me. Loops are ugly, but they are fast :) A straight code answer is fine, but I'd like to understand the logic needed for such a comparison as well. I'm probably going to have to implement said logic more than a few times down the road.
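    A hedged sketch of the standard order-insensitive comparison, shown in C# but using only loops and the Equals/GetHashCode overrides already in place (so the same logic ports straight to VB.NET with no LINQ involved): count occurrences from one list in a dictionary, then decrement while walking the other; any miss or early zero means the lists differ.

        using System.Collections.Generic;

        static class ListCompare
        {
            // True when both lists contain the same elements the same number of times, in any order.
            public static bool SameElements<T>(List<T> left, List<T> right)
            {
                if (left.Count != right.Count) return false;

                var counts = new Dictionary<T, int>();      // relies on T's Equals/GetHashCode overrides
                foreach (var item in left)
                {
                    int n;
                    counts.TryGetValue(item, out n);
                    counts[item] = n + 1;
                }
                foreach (var item in right)
                {
                    int n;
                    if (!counts.TryGetValue(item, out n) || n == 0) return false;
                    counts[item] = n - 1;
                }
                return true;    // equal lengths and no count driven negative means every count is back to zero
            }
        }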

    Read the article

  • complex regular expression task

    - by Don Don
    Hi, What regular expressions do I need to extract the section title(s) from a text file? So, in the following sample text, I'd like to extract "Communication and Leadership", "1.Self-Knowledge", "2. Humility", and "(3) Clear Thinking". Many thanks.

        Communication and Leadership
        True leaders understand that, rather than forcing their followers into a preconceived mold, their job is to motivate and organize followers to collectively accomplish goals that are in everyone's interests. The ability to communicate this to co-workers and followers is critical to the effectiveness of leadership.

        1.Self-Knowledge
        Superior leaders are able to devote their skills and energies to leadership of a group because they have worked through personal issues to the point where they know themselves thoroughly. A high level of self-knowledge is a prerequisite to effective communication skills, because the things that you communicate as a leader are coming from within.

        2. Humility
        This subversion of personal preference requires a certain level of humility. Although popular definitions of leaders do not always see them as humble, the most effective leaders actually are. This humility may not be expressed in self-effacement, but in a total commitment to the goals of the organization. Humility requires an understanding of one's own relative unimportance in comparison to larger systems.

        (3) Clear Thinking
        Clarity of thinking translates into clarity of communication. A leader whose goals or personal analysis is muddled will tend to deliver unclear or ambiguous directions to followers, leading to confusion and dissatisfaction. A leader with a clear mind who is not ambivalent about her purposes will communicate what needs to be done in a straightforward and unmistakable manner.
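    A hedged C# sketch (the question doesn't name a language): treat a line as a section title when it optionally starts with a "1." or "(3)" style marker and otherwise contains only a handful of letter-or-hyphen words with no sentence punctuation. The pattern is tuned to the sample above and would need adjusting for other documents; a short punctuation-free body line could still slip through.

        using System.Text.RegularExpressions;

        static class SectionTitles
        {
            // Optional "1." / "(3)" prefix, then a capitalised word plus up to five more words, and nothing else on the line.
            private static readonly Regex TitleLine = new Regex(
                @"^\s*(?:\(\d+\)\s*|\d+\s*\.\s*)?[A-Z][A-Za-z-]*(?:\s+[A-Za-z-]+){0,5}\s*$",
                RegexOptions.Multiline);

            public static MatchCollection Extract(string text)
            {
                return TitleLine.Matches(text);   // each Match.Value is one title line
            }
        }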

    Read the article
