Search Results

Search found 1627 results on 66 pages for 'scenarios'.

Page 49/66 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • LINQ - Splitting up a string with maximum length, but not chopping words apart.

    - by Stacey
    I have a simple LINQ extension method:

        public static IEnumerable<string> SplitOnLength(this string input, int length)
        {
            int index = 0;
            while (index < input.Length)
            {
                if (index + length < input.Length)
                    yield return input.Substring(index, length);
                else
                    yield return input.Substring(index);
                index += length;
            }
        }

    This takes a string and chops it into a collection of strings that do not exceed the given length. It works well; however, I'd like to go further, because it chops words in half. I don't need it to understand anything complicated - I just want it to cut a chunk off early if cutting at the full length would land in the middle of text (basically anything that isn't whitespace). I'm not strong with LINQ, so I was wondering if anyone had an idea on how to go about this. I know what I am trying to do, but I'm not sure how to approach it. So let's say I have the following text: "This is a sample block of text that I would pass through the string splitter." If I call SplitOnLength(6), I get chunks like "This i", "s a sa", "mple b", "lock o", "f text", and so on. I would rather it be smart enough to stop at word boundaries and look more like "This", "is a", "sample" (a bad example, since a single word can exceed the maximum length, but in real scenarios the length would be larger, closer to 200). Can anyone help me?
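
    A minimal sketch (not from the original post) of one way to make the splitter word-aware: back up to the last whitespace inside an over-long chunk and fall back to a hard cut when a single word exceeds the limit. The method name and the exact boundary rules are illustrative assumptions.

        using System.Collections.Generic;

        public static class StringSplitExtensions
        {
            public static IEnumerable<string> SplitOnLengthWholeWords(this string input, int length)
            {
                int index = 0;
                while (index < input.Length)
                {
                    if (index + length >= input.Length)
                    {
                        yield return input.Substring(index);   // last chunk fits as-is
                        yield break;
                    }

                    int take = length;
                    if (!char.IsWhiteSpace(input[index + length]))
                    {
                        // A full-length cut would land mid-word: back up to the last space inside the chunk.
                        int lastSpace = input.LastIndexOf(' ', index + length - 1, length);
                        if (lastSpace > index)
                            take = lastSpace - index;          // cut at the space, not mid-word
                    }

                    yield return input.Substring(index, take);
                    index += take;
                    while (index < input.Length && char.IsWhiteSpace(input[index]))
                        index++;                               // skip the separator before the next chunk
                }
            }
        }

        // Usage: "This is a sample block".SplitOnLengthWholeWords(6) -> "This", "is a", "sample", "block"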

    Read the article

  • iPhone - trouble loading data from a web service into a table view

    - by medampudi
    I am using a window-based application and then loading my initial navigation-view-based controller. After loading it, if the user is not registered or does not have credentials present, it takes the user to a login view controller:

        loginViewController *sampleView = [[loginViewController alloc] initWithNibName:@"loginViewController" bundle:nil];
        [self.navigationController presentModalViewController:sampleView animated:YES];
        [sampleView release];

    Right after that I try to load the table with data that I get from a web service using ASIHTTP; for this question, let's say it takes 3 seconds to get the data and then deserialize it. Now, my question: it works out okay in later runs, as I store the username and password in a secure location, but on the first run I am not able to get the data to load into the table view. I have tried a lot of things:
    1. Initially the data-fetch methods were in a different place, so I thought that might be the problem and moved them into the same place as the table view controller (the navigation controller).
    2. I even put a reload of the data at the end of the data parsing and deserialization - nothing happens.
    3. I did not fully understand the concept of @property and related details.
    4. The screen is black with nothing displayed on it for a good 5 seconds on consecutive launches of the app, so could we have something like an MBProgressHUD implemented for this?
    Could anyone please help with these scenarios and offer guidance on what path to take from here?

    Read the article

  • Determine asymmetric latencies in a network

    - by BeeOnRope
    Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them. Of course, this map may become stale over time as the network topology changes - but let's ignore those complexities for now and assume the network is relatively static.

    Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. To get the round-trip time, it is a simple matter of timing a return-trip ping from the local host to a remote host - both timing events (start, stop) occur on the local host. What if I want one-way times, under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (or at least that their error is of the same magnitude as the latencies involved), how can I calculate the one-way latency?

    In a related question: is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons/hardware configurations? Certainly I'm aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and cable, but I'm not so sure about latency. Added: after considering the comment below, the second portion of the question is probably better off on Server Fault.
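
    For reference, a small sketch of the standard NTP-style estimate this kind of measurement usually falls back on; the numbers are illustrative. It also shows why the one-way latencies cannot be separated from the clock offset without an outside time reference: any path asymmetry folds directly into the offset estimate.

        # t0: request sent by A, t1: request received by B,
        # t2: reply sent by B,   t3: reply received by A (each in its own host's clock)
        def ntp_estimate(t0, t1, t2, t3):
            offset = ((t1 - t0) + (t2 - t3)) / 2.0   # B's clock minus A's, assuming symmetric paths
            delay = (t3 - t0) - (t2 - t1)            # total time actually spent on the wire
            return offset, delay

        # Example: B's clock runs 5 ms ahead of A's, the forward leg takes 10 ms and the
        # return leg 30 ms. The estimate reports offset = -0.005 (the true value is +0.005)
        # and delay = 0.040: the 20 ms asymmetry hides inside the offset estimate.
        print(ntp_estimate(0.000, 0.015, 0.017, 0.042))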

    Read the article

  • When to use reinterpret_cast?

    - by HeretoLearn
    I am a little confused about the applicability of reinterpret_cast vs. static_cast. From what I have read, the general rule is to use static_cast when the types can be interpreted at compile time, hence the word static. This is also the cast the C++ compiler uses internally for implicit casts. reinterpret_cast is applicable in two scenarios: converting integer types to pointer types and vice versa, or converting one pointer type to another. The general idea I get is that this is unportable and should be avoided. Where I am a little confused is one usage I need: I am calling C++ from C, and the C code needs to hold on to the C++ object, so basically it holds a void*. What cast should be used to convert between the void* and the class type? I have seen usage of both static_cast and reinterpret_cast. From what I have been reading, it appears static_cast is better, as the cast can happen at compile time - though it is also said that reinterpret_cast is the one for converting from one pointer type to another?
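
    A hedged sketch of the void* round trip described above: static_cast works in both directions, because conversion to and from void* is a standard conversion and no reinterpretation of the bits is involved. The Widget class and the C-callable wrapper functions are illustrative.

        #include <iostream>

        class Widget {                         // illustrative stand-in for the C++ object held by the C code
        public:
            void Poke() { std::cout << "poked\n"; }
        };

        // Handle-style API the C side can call; it only ever sees a void*.
        extern "C" void* widget_create()         { return static_cast<void*>(new Widget); }
        extern "C" void  widget_poke(void* h)    { static_cast<Widget*>(h)->Poke(); }
        extern "C" void  widget_destroy(void* h) { delete static_cast<Widget*>(h); }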

    Read the article

  • Which way is preferred when doing asynchronous WCF calls?

    - by Mikael Svenson
    When invoking a WCF service asynchronously, there seem to be two ways it can be done.

    1.
        public void One()
        {
            WcfClient client = new WcfClient();
            client.BegindoSearch("input", ResultOne, null);
        }

        private void ResultOne(IAsyncResult ar)
        {
            WcfClient client = new WcfClient();
            string data = client.EnddoSearch(ar);
        }

    2.
        public void Two()
        {
            WcfClient client = new WcfClient();
            client.doSearchCompleted += TwoCompleted;
            client.doSearchAsync("input");
        }

        void TwoCompleted(object sender, doSearchCompletedEventArgs e)
        {
            string data = e.Result;
        }

    And with the new Task<T> class we have an easy third way, by wrapping the synchronous operation in a task.

    3.
        public void Three()
        {
            WcfClient client = new WcfClient();
            var task = Task<string>.Factory.StartNew(() => client.doSearch("input"));
            string data = task.Result;
        }

    They all give you the ability to execute other code while you wait for the result, but I think Task<T> gives better control over what you execute before or after the result is retrieved. Are there any advantages or disadvantages to using one over the other? Or scenarios where one way of doing it is preferable?

    Read the article

  • Calling the constructor of a class in the destructor of the same class

    - by dicaprio
    Experts! I know this question is a lousy one, but I still dared to ask it, hoping I would learn from all of you. I was trying some examples as part of my routine and did this horrible thing: I called the constructor of the class from the destructor of the same class. I don't really know if this is ever required in real programming; I can't think of any real-world scenario where we really need to call functions or the constructor in a destructor. Usually, a destructor is meant for cleaning up. If my understanding is correct, why doesn't the compiler complain? Is it valid for some good reason? If so, what is it? I tried the Sun Forte, g++ and VC++ compilers, and none of them complain about it.

        using namespace std;
        class test {
        public:
            test() { cout << "CTOR" << endl; }
            ~test() { cout << "DTOR" << endl; test(); }
        };

    Read the article

  • How can I use parse_url in PHP when there is a URL in a string variable?

    - by Eric O
    I am admittedly a PHP newbie, so I need some help. I am creating a self-designed affiliate program for my site, with the option for an affiliate to add a SubID to their link for tracking. Since I have no control over what is entered, I have been testing different scenarios and found a bug when a full URL is entered (i.e. "http://example.com"). In my PHP I can grab the variable from the string no problem. My problem comes when I get the referring URL and parse it (since I need to parse the referring URL to get the host name for other uses). Code below:

        $refURL = getenv("HTTP_REFERER");
        $parseRefURL = parse_url($refURL);

    WORKS when the incoming link is (for example): http://example.com/?ref=REFERRER'S-ID&sid=www.test.com
    ERROR when the incoming link is (notice the addition of "http://" after "sid="): http://example.com/?ref=REFERRER'S-ID&sid=http://www.test.com
    Here is the warning message:

        Warning: parse_url(/?ref=REFERRER'S-ID&sid=http://www.test.com) [function.parse-url]: Unable to parse url in /home4/'directory'/public_html/hosterdoodle/header.php on line 28

    Any ideas on how to keep parse_url from being thrown off when someone decides to place a URL in a variable? (I actually tested this down to the point that it will throw the error with as little as ":/" in the variable.)
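
    One hedged way around this (variable names are illustrative): URL-encode the SubID when the affiliate link is generated, and read it back from $_GET, which PHP decodes automatically, so parse_url() on the referring URL never sees a bare scheme inside the query string.

        <?php
        // When building the affiliate's link, encode whatever they typed into the SubID.
        $subId = "http://www.test.com";                               // whatever the affiliate entered
        $link  = "http://example.com/?ref=REFID&sid=" . urlencode($subId);
        // -> http://example.com/?ref=REFID&sid=http%3A%2F%2Fwww.test.com

        // Later, on the landing page:
        $sid      = isset($_GET['sid']) ? $_GET['sid'] : '';          // PHP has already decoded this
        $refURL   = getenv("HTTP_REFERER");
        $refParts = $refURL ? parse_url($refURL) : array();           // no stray "://" in the query now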

    Read the article

  • What rules govern cross-version compatibility for .NET applications and the C# language?

    - by John Feminella
    For some reason I've always had trouble remembering the backwards/forwards compatibility guarantees made by the framework, so I'd like to put that to bed forever. Suppose I have two assemblies, A and B. A is older and references .NET 2.0 assemblies; B references .NET 3.5 assemblies. I have the source for A and B, Ax and Bx respectively; they are written in C# at the 2.0 and 3.0 language levels. (That is, Ax uses no features that were introduced later than C# 2.0; likewise, Bx uses no features that were introduced later than 3.0.) I have two environments, C and D. C has the .NET 2.0 framework installed; D has the .NET 3.5 framework installed. Now, which of the following can/can't I do?
    Running: run A on C? run A on D? run B on C? run B on D?
    Compiling: compile Ax on C? compile Ax on D? compile Bx on C? compile Bx on D?
    Rewriting: rewrite Ax to use features from the C# 3 language level and compile it on D, while having it still work on C? rewrite Bx to use features from the C# 4 language level on another environment E that has .NET 4, while having it still work on D?
    Referencing from another assembly: reference B from A and have a client app on C use it? reference B from A and have a client app on D use it? reference A from B and have a client app on C use it? reference A from B and have a client app on D use it?
    More importantly, what rules govern the truth or falsity of these hypothetical scenarios?

    Read the article

  • Need help/guidance about creating a desktop application with a GUI

    - by Somebody still uses you MS-DOS
    I'm planning to build a desktop application using Python, to learn some desktop concepts. I'm going to use GTK or Qt; I still haven't decided which one. Fact is: I would like to create an application that can be called from the command line AND used through a GUI, so it would be useful for command-line fans and GUI users as well. It would be interesting to create a web interface too in the future, so it could run on a server somewhere using an HTML interface created with a template language. I'm thinking about two approaches:
    - Creating a "model" with a simple interface which is called from a desktop/web implementation;
    - Creating a "model" with an HTML interface, and embedding a browser component so I could reuse all the code in both the desktop and web scenarios.
    My question is: exactly which concepts are involved in this project? What advantages/disadvantages does each approach have? Are they both feasible? By "interface", I just mean some interfaces.py files with function definitions - is that a bad approach? I would like some book recommendations, or resources for both options - or source code from projects which share the same GUI/command-line/web goals I'm after. Thanks in advance!
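
    A minimal sketch of the first approach - a pure-Python core with a thin command-line wrapper; a GTK/Qt front end (or a web view) would import and call the same core function. All names are illustrative.

        import argparse

        def count_words(path):
            """Core logic: no UI imports, reusable from CLI, GUI or web."""
            with open(path, encoding="utf-8") as f:
                return sum(len(line.split()) for line in f)

        def run_cli():
            # Thin command-line front end around the core.
            parser = argparse.ArgumentParser(description="Count words in a file")
            parser.add_argument("path")
            args = parser.parse_args()
            print(count_words(args.path))

        if __name__ == "__main__":
            run_cli()

        # A GTK/Qt front end would live in its own module, import count_words,
        # and call it from a button callback; a web front end from a view function.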

    Read the article

  • data ownership and performance

    - by Ami
    We're designing a new application and we ran into some architectural questions when thinking about data ownership. We broke the system down into components, for example Customer and Order. Each of these components/modules is responsible for a specific business domain; e.g., Customer deals with CRUD of customers and business processes centered around customers (registering a new customer, blocking a customer account, etc.). Each module is the owner of a set of database tables, and only that module may access them. If another module needs data that is owned by a different module, it retrieves it by requesting it from that module. So far so good. The question is how to deal with scenarios such as a report that needs to show all the customers and, for each customer, all his orders. In such a case we need to get all the customers from the Customer module, iterate over them, and for each one get all the data from the Order module. Performance won't be good. Obviously it would be much better to have a stored procedure join the customers table and the orders table, but that would also mean direct access to data that is owned by another module, creating coupling and dependencies that we wish to avoid. This is a simplified example; we're dealing with an enterprise application with a lot of business entities and relationships, and my goal is to keep it clean and as loosely coupled as possible. I foresee many changes to the data schema in the future, and possibly splitting the system into several completely separate systems. I wish to have a design that would allow this to be done in a relatively easy way. Thanks!

    Read the article

  • Using delegate types vs. methods

    - by Grant Sutcliffe
    I see increasing use of the delegate types offered in the System namespace (Action, Predicate, etc.). As these are delegates, my understanding is that they should be used where we have traditionally used delegates in the past (asynchronous calls, starting threads, event handling, etc.). Is it just preference, or is it considered good practice to use these delegate types in scenarios such as the one below, rather than calls to methods we have declared (or anonymous methods)?

        public void MyMethod()
        {
            Action<string> action = delegate(string userName)
            {
                try
                {
                    XmlDocument profile = DataHelper.GetProfile(userName);
                    UpdateMember(profile);
                }
                catch (Exception exception)
                {
                    if (_log.IsErrorEnabled) _log.ErrorFormat(exception.Message);
                    throw (exception);
                }
            };
            GetUsers().ForEach(action);
        }

    At first, I found the code less intuitive to follow than using declared or anonymous methods. I am starting to code this way, and wonder what the views are in this regard. The example above is all within one method. Is this delegate overuse?

    Read the article

  • :include and table aliasing

    - by dondo
    I'm suffering from a variant of the problem described here: ActiveRecord assigns table aliases for association joins fairly unpredictably. The first association to a given table keeps the table name. Further joins with associations to that table use aliases including the association names in the path... but it is common for app developers not to know about [other] joins at coding time. In my case I'm being bitten by a toxic mix of has_many and :include. Many tables in my schema have a state column, and the has_many wants to specify conditions on that column: has_many :foo, :conditions => {:state => 1}. However, since the state column appears in many tables, I disambiguate by explicitly specifying the table name: has_many :foo, :conditions => "this_table.state = 1". This has worked fine until now, when for efficiency I want to add an :include to preload a fairly deep tree of data. This causes the table to be aliased inconsistently in different code paths. My reading of the tickets referenced above is that this problem is not and will not be fixed in Rails 2.x. However, I don't see any way to apply the suggested workaround (to specify the aliased table name explicitly in the query). I'm happy to specify the table alias explicitly in the has_many statement, but I don't see any way to do so. As such, the workaround doesn't appear applicable to this situation (nor, I presume, in many 'named_scope' scenarios). Is there a viable workaround?

    Read the article

  • What happens to existing workspaces after upgrading to TFS 2010

    - by user351671
    Hi, I was looking for some insight into what happens to existing workspaces and files that are already checked out by people after an upgrade to TFS 2010. Surprisingly enough, I cannot find any satisfactory information on this. (I am talking about upgrading on new hardware, by the way: fresh TFS instance, upgraded databases.) I've checked the TFS installation guide and searched the web, and all I could find are upgrade scenarios for the server side. Nobody even mentions what happens to source control clients. I've created a virtual machine to test the upgrade process; the upgrade was successful, and all my files and workspaces exist on the new server too. The problem is: the new TFS installation has a new instance ID. When I redirected the clients to the new server, the client seemed unable to match files and file states in the workspace with the ones on the new server. This makes me wonder whether it will be possible to keep working after the production upgrade. As I mentioned above, I cannot find anything on this; it would be great if anyone could point me to a paper or blog post about it. Thanks in advance...

    Read the article

  • Determining if an object is visible and clickable

    - by Alan Mendelevich
    I'm looking for ways to effectively determine whether a control is actually visible and clickable - I mean beyond checking the Visibility property of the object. I can check RenderSize, which would be [0,0] if any of the parent elements is collapsed, so that is simple too. I can also traverse up the visual tree and see whether the Opacity of all elements is set to 1. What I don't know how to check nicely are these scenarios:
    1. The object is obstructed by some other object. Obviously it's possible to use FindElementsInHostCoordinates() and do computations to find out how much those objects obstruct, but that could be overkill. I could also take a "screenshot" of the object in question and a "screenshot" of the whole page and check whether the pixels where my object should be match the actual object's pixels; that sounds like overkill too.
    2. The object is obstructed by a transparent object that still "swallows" clicks (taps). The workarounds for the first problem could still fail in this scenario.
    Any better ideas? Am I missing something? Thanks!
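
    A hedged sketch of the FindElementsInHostCoordinates() approach mentioned above (Silverlight / Windows Phone), assuming root is the root visual; names are illustrative, and a single centre-point test obviously cannot prove the whole control is unobstructed.

        using System.Linq;
        using System.Windows;
        using System.Windows.Media;

        public static class HitTestSketch
        {
            public static bool IsTopMostAtCentre(UIElement target, UIElement root)
            {
                GeneralTransform toRoot = target.TransformToVisual(root);
                Point centre = toRoot.Transform(new Point(target.RenderSize.Width / 2,
                                                          target.RenderSize.Height / 2));

                // Elements under the point, top-most first; a transparent element that
                // still swallows taps shows up here, which helps with scenario 2.
                var hits = VisualTreeHelper.FindElementsInHostCoordinates(centre, root);
                return hits.FirstOrDefault() == target;
            }
        }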

    Read the article

  • Why can't we just use a hash of the passphrase as the encryption key (and IV) with symmetric encryption algorithms?

    - by TX_
    Inspired by my previous question, I now have a very interesting idea: do you really ever need to use Rfc2898DeriveBytes or similar classes to "securely derive" the encryption key and initialization vector from the passphrase string, or will a simple hash of that string work equally well as a key/IV when encrypting data with a symmetric algorithm (e.g. AES, DES, etc.)?

    I see tons of AES encryption code snippets where the Rfc2898DeriveBytes class is used to derive the encryption key and initialization vector (IV) from the password string. It is assumed that one should use a random salt and a huge number of iterations to derive a key/IV secure enough for the encryption. While deriving bytes from a password string using this method is quite useful in some scenarios, I think it's not applicable when encrypting data with symmetric algorithms. Here is why: using a salt makes sense when there is a possibility of building precalculated rainbow tables, so that when an attacker gets his hands on a hash he can look up the original password. But with symmetric data encryption, I think this is not required, as the hash of the password string - the encryption key - is never stored anywhere.

    So, if we just take the SHA1 hash of the password and use it as the encryption key/IV, isn't that going to be equally secure? What is the purpose of using the Rfc2898DeriveBytes class to generate the key/IV from the password string (a very performance-intensive operation), when we could just use a SHA1 (or any other) hash of that password? A hash would give a random bit distribution in the key (as opposed to using the string bytes directly), and the attacker would have to brute-force the whole key range anyway (e.g. if the key length is 256 bits, he would have to try 2^256 combinations). So either I'm wrong in a dangerous way, or all those samples of AES encryption (including many upvoted answers here on SO) that use the Rfc2898DeriveBytes method to generate the encryption key and IV are just wrong.
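
    For comparison, a hedged sketch of the Rfc2898DeriveBytes pattern those snippets use; the iteration count and the key/IV sizes are illustrative assumptions.

        using System.Security.Cryptography;

        static class KeyDerivationSketch
        {
            // The salt is random per message/user and stored alongside the ciphertext.
            public static Aes CreateAes(string passphrase, byte[] salt)
            {
                var aes = Aes.Create();
                using (var kdf = new Rfc2898DeriveBytes(passphrase, salt, 10000))
                {
                    aes.Key = kdf.GetBytes(32);   // 256-bit key
                    aes.IV  = kdf.GetBytes(16);   // 128-bit IV
                }
                return aes;
            }
        }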

    Read the article

  • Is it okay to violate the principle that collection properties should be readonly for performance?

    - by uriDium
    I used FxCop to analyze some code I had written. I had exposed a collection via a setter. I understand why this is not good: changing the backing store when I don't expect it is a very bad idea. Here is my problem, though. I retrieve a list of business objects from a data access object (DAO). I then need to add that collection to another business class, and I was doing it with the setter method. The reason I did this was that it is going to be faster to make an assignment than to insert hundreds of thousands of objects one at a time into the collection again via another addElement method. Is it okay to have a setter for a collection in some scenarios? I thought of instead having a constructor which takes a collection. I also thought maybe I could pass the object in to the DAO and let the DAO populate it directly. Are there any other better ideas?
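
    A hedged sketch of the constructor idea raised above: copy the DAO result once and expose only a read-only view, so callers can neither swap nor mutate the backing store and there is no per-item add loop. Customer and Report are illustrative placeholder names.

        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public class Customer { /* placeholder business object */ }

        public class Report
        {
            private readonly List<Customer> _customers;

            public Report(IEnumerable<Customer> customers)
            {
                _customers = new List<Customer>(customers);   // single bulk copy of the DAO result
            }

            public ReadOnlyCollection<Customer> Customers
            {
                get { return _customers.AsReadOnly(); }       // read-only view, no setter needed
            }
        }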

    Read the article

  • Select a dropdown option by value using jQuery

    - by user1765862
    I have a select with the following HTML structure:

        <select id="mainMenu" data-mini="true" name="select-choice-min">
            <option value="1">1</option>
            <option value="2">2</option>
            <option value="3">3</option>
            <option value="4">4</option>
            <option value="5">5</option>
            <option value="6">6</option>
        </select>

    On page load I want to initially select the option with value 5, so I tried the following (but nothing changed):

        $(document).ready(function () {
            $('#mainMenu option').eq(5).prop('selected', true);
        })

    I'm using jQuery 1.8.3 and jQuery Mobile, since this is a mobile website:

        <script src="/Scripts/jquery-1.8.3.js"></script>
        <script src="/Scripts/jquery.mobile-1.4.2.js"></script>

    UPDATE: I just realized that all the code from the posted answers works (as well as mine): the #mainMenu option value is set to the desired value (5), but its text doesn't change (in any of these scenarios). Is this normal, and is there a workaround for it? I tried $("mainMenu").text("5");
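
    A hedged sketch of the usual jQuery Mobile idiom for this: set the value, then ask the selectmenu widget to redraw its styled button so the visible text follows. The refresh call is an assumption about the widget API, not something taken from the original post.

        $(document).ready(function () {
            // Change the underlying value, then let jQuery Mobile repaint the fake select button.
            $('#mainMenu').val('5').selectmenu('refresh');
        });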

    Read the article

  • Guarantee child records are in either one table or another, but not both?

    - by user151841
    I have a table with two child tables. For each record in the parent table, I want one and only one record in one of the child tables - not one in each, and not none. How do I define that? Here's the backstory (feel free to criticize this implementation, but please answer the question above, because this isn't the only time I've encountered it): I have a database that holds data pertaining to user surveys. It was originally designed with one authentication method for starting a survey. Since then, requirements have changed, and now there are two different ways someone could sign on to start a survey. Originally I captured the authentication token in a column in the survey table. Since the requirements changed, there are three other bits of data that I want to capture for authentication. So for each record in the survey table, I'm either going to have one token or a set of three. All four of these are of different types, so my thought was, instead of having four columns where either one is going to be null or three are going to be null (or even worse, a bad mashup of those scenarios), I would have two child tables: one for holding the single authentication token, the other for holding the three. The problem is, I don't know offhand how to define that in DDL. I'm using MySQL, so maybe this needs a feature that MySQL doesn't implement.

    Read the article

  • Designing for varying mobile device resolutions, i.e. iPhone 4 & iPhone 3G

    - by Josh
    As the design community moves to design applications & interfaces for mobile devices, a new problem has arisen: varying screen DPIs. Here's the situation:
    Touch:
    - iPhone 3G/S ~ 160 dpi
    - iPhone 4 ~ 300 dpi
    - iPad ~ 126 dpi
    - Android device @ 480p ~ 200 dpi
    Point / click:
    - Laptop @ 720p ~ 96 dpi
    - Desktop @ 720p ~ 72 dpi
    There is certainly a clear distinction between desktop and mobile, so having two separate front-ends to the same app is logical, especially when considering that one is "touch"-based and the other is "point/click"-based. The challenge lies in designing static graphical elements that will scale between, say, 160 dpi and 300+ dpi, and getting a consistent and clean design across zoom levels. Any thoughts on how to approach this? Here are some scenarios, but each has drawbacks as well:
    - Design a single set of assets (high resolution), then adjust zoom levels based on detected resolution / device. Drawbacks: performance cost caused by code layering, varying device support for zoom.
    - Develop & optimize multiple variations of image and CSS assets, then hide / show each based on device. Drawbacks: extra work in design & QA.
    Anyone have thoughts or experience on how to deal with this? We should certainly be looking at methods that use / support HTML5 and CSS3.

    Read the article

  • Using memset on structures in C++

    - by garry
    Hey guys. I am working on fixing older code for my job. It is currently written in C++. They converted static allocation to dynamic but didn't edit the memset/memcmp/memcpy calls. This is my first programming internship, so bear with my newbie-like question. The following code is in C, but I want to have it in C++ (I read that malloc isn't good practice in C++). I have two scenarios. First, we have f created, and then &f is used to fill it with zeros. The second is a pointer *pf; I'm not sure how to set pf to all zeros like the previous example in C++. Could you just do pf = new foo instead of malloc and then call memset(pf, 0, sizeof(foo))?

        struct foo { ... } f;
        memset( &f, 0, sizeof(f) );

        // or

        struct foo { ... } *pf;
        pf = (struct foo*) malloc( sizeof(*pf) );
        memset( pf, 0, sizeof(*pf) );
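
    A hedged sketch of the C++ replacement for the malloc/memset pair, using value-initialisation; the members of foo are illustrative.

        #include <cstring>   // only needed for the memset variant below

        struct foo
        {
            int    id;
            double weight;
            char   name[32];
        };

        int main()
        {
            foo* pf = new foo();        // note the (): value-initialised, every member zeroed
            // ... use pf ...
            delete pf;

            // new foo followed by memset also works while foo stays a plain POD, but it
            // would stomp on vtables or members with constructors, so prefer the form above.
            foo* pf2 = new foo;
            std::memset(pf2, 0, sizeof(*pf2));
            delete pf2;
            return 0;
        }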

    Read the article

  • Using a single texture image unit with multiple sampler uniforms

    - by bcrist
    I am writing a batching system which tracks currently bound textures in order to avoid unnecessary glBindTexture() calls. I'm not sure if I need to keep track of which textures have already been used by a particular batch, so that if a texture is used twice it will be bound to a different TIU for the second sampler which requires it. Is it acceptable for an OpenGL application to use the same texture image unit for multiple samplers within the same shader stage? What about samplers in different shader stages? For example:

    Fragment shader:
        ...
        uniform sampler2D samp1;
        uniform sampler2D samp2;
        void main() { ... }

    Main program:
        ...
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, tex_id);
        glUniform1i(samp1_location, 0);
        glUniform1i(samp2_location, 0);
        ...

    I don't see any reason why this shouldn't work, but what about if the shader program also included a vertex shader like this:

    Vertex shader:
        ...
        uniform sampler2D samp1;
        void main() { ... }

    In this case, OpenGL is supposed to treat both instances of samp1 as the same variable, and exposes a single location for them. Therefore, the same texture unit is being used in the vertex and fragment shaders. I have read that using the same texture in two different shader stages counts doubly against GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, but this would seem to contradict that. In a quick test on my hardware (HD 6870), all of the following scenarios worked as expected:
    1. one TIU used for 2 sampler uniforms in the same shader stage
    2. one TIU used for 1 sampler uniform which is used in 2 shader stages
    3. one TIU used for 2 sampler uniforms, each occurring in a different stage
    However, I don't know if this is behavior that I should expect on all hardware/drivers, or if there are performance implications.

    Read the article

  • URL path changes between dev and published version

    - by Bob Horn
    I just got Scott Hanselman's chat app with SignalR working with ASP.NET MVC 4. After hours of configuration, trial and error, and getting different versions of Windows to talk to each other on my home network, it's all working - except that I'm left with one issue that I'm not sure how to handle. This line of JavaScript has to change depending on whether I'm running the app through Visual Studio or the published (IIS) version.

    Works when running within VS:
        var connection = $.connection('echo');

    Works with the published version:
        var connection = $.connection('ChatWithSignalR/echo');

    When I run within VS, the URL is http://localhost:9145/, and the published version is http://localhost/ChatWithSignalR. If I don't change that line of code and try to run the app within VS using the JavaScript that has ChatWithSignalR in it, I get an error like this:

        Failed to load resource: the server responded with a status of 404 (Not Found)
        http://localhost:9145/ChatWithSignalR/echo/negotiate?_=1347809290826

    What can I do so that I can use the same JavaScript code and have it work in both scenarios?

        var connection = $.connection('??????');

    Note, this is in my Global.asax.cs:

        RouteTable.Routes.MapConnection<MyConnection>("echo", "echo/{*operation}");
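
    One hedged option, assuming the script lives in a Razor view: emit the application root from the server and build the connection URL from it, so the same line works in both environments. Url.Content is the stock ASP.NET MVC helper; the variable name is illustrative.

        var appRoot = '@Url.Content("~/")';        // "/" in dev, "/ChatWithSignalR/" when published
        var connection = $.connection(appRoot + 'echo');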

    Read the article

  • How to handle product ratings in a database

    - by Mel
    Hello, I would like to know the best approach to storing product ratings in a database. I have in mind the following two (simplified, and assuming a MySQL db) scenarios:

    Scenario 1: Create two columns in the product table to store the number of votes and the sum of all votes, and use the columns to get an average on the product display page:
        products(productID, productName, voteCount, voteSum)
    Pros: I will only need to access one table, and thus execute one query, to display product data and ratings.
    Cons: Write operations will be executed in a table whose original purpose is only to furnish product data.

    Scenario 2: Create an additional table to store ratings:
        products(productID, productName)
        ratings(productID, voteCount, voteSum)
    Pros: Isolates ratings into a separate table, leaving the products table to furnish data on available products.
    Cons: I will have to execute two separate queries on product page requests (one for data and another for ratings).

    In terms of performance, which of the two approaches is best:
    1. Allow users to execute an occasional write query against a table that will handle hundreds of read requests?
    2. Execute two queries for every product page, but isolate the write query into a separate table?

    I'm a novice to database development, and often find myself struggling with simple questions such as these. Many thanks,

    Read the article

  • Need help setting up a truststore's chain of authority (in Tomcat)

    - by codeinfo
    Lead in ... I'm not an expert, by far, in application security via SSL, but am trying to establish a test environment that includes all possible scenarios we may encounter in production. For this I have a tree of Certificate Authorities (CAs) that are the issuers of an assortment of test client certificates, and node/server certificates (complex test environment representing the various published web services and other applications we integrate with). The structure of these CAs are as follows: Root CA, which has signed/issued Sub CA1, Sub CA2, and Sub CA3. These subs have then signed/issued all certificates of those various nodes and clients in the environment. Now for the question .... In my application's truststore I would like to trust everything signed by Sub CA1, and Sub CA2, but not Sub CA3 (untrusted). Does this mean my truststore should (1) ONLY include Sub CA1 and Sub CA2, or (2) should it include Root CA, Sub CA1, and Sub CA2? I don't know what is the proper way to represent this trust chain in a truststore. In the future I would also like to add a Sub CA4 (also signed/issued by the Root CA), but add that to a Certificate Revocation List (CRL) for testing purposes. Ahead of time, thank you for any help concerning this. It's greatly appreciated.
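
    A hedged sketch of building such a truststore with keytool: import only the issuers you actually intend to trust (Sub CA1 and Sub CA2). File names, aliases and the store password are illustrative.

        # Import the two trusted sub CAs; anything issued by them will then validate.
        keytool -importcert -trustcacerts -alias sub-ca1 -file sub-ca1.pem \
                -keystore truststore.jks -storepass changeit
        keytool -importcert -trustcacerts -alias sub-ca2 -file sub-ca2.pem \
                -keystore truststore.jks -storepass changeit
        # With only the two sub CAs present as trust anchors, certificates chaining to
        # Sub CA3 (or a future Sub CA4) are rejected, and the Root CA itself does not
        # need to be in the truststore for the trusted sub CAs to work.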

    Read the article

  • Saving information in the IO System

    - by djTeller
    Hi kernel gurus, I need to write a kernel module that simulates a "multicaster" using the /proc file system. Basically it needs to support the following scenarios:
    1) Allow one write access to the /proc file and many read accesses to the /proc file.
    2) The module should keep a buffer with the contents of the last successful write, and each write should be matched by a read from every reader.
    Consider scenario 2: a writer wrote something and there are two readers (A and B). A read the contents of the buffer and then tried to read again; in this case it should go into a wait queue and wait for the next message - it should not get the same buffer again. I need to keep a map of all the PIDs that have already read the current buffer, and in case they try to read again while the buffer has not changed, they should be blocked until there is a new buffer. I'm trying to figure out whether there is a way I can save that information without a map. I heard there are some redundant fields inside the I/O system that I can use to flag a process that has already read the current buffer. Can someone give me a tip on where I should look for that field? How can I save info about the current process without keeping a "map" of PIDs and buffers? Thanks!
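
    A hedged sketch of the per-open-file idea hinted at above (the "redundant field" is usually filp->private_data): store a per-reader generation marker there and compare it with a global counter the writer bumps on every successful write. Locking, copy_to_user and the release/write handlers are omitted; names and sizes are illustrative.

        #include <linux/fs.h>
        #include <linux/slab.h>
        #include <linux/wait.h>
        #include <linux/sched.h>

        #define BUF_SIZE 256

        static char msg_buf[BUF_SIZE];
        static unsigned long msg_gen;                     /* bumped by the single writer */
        static DECLARE_WAIT_QUEUE_HEAD(read_wq);

        static int mcast_open(struct inode *inode, struct file *filp)
        {
            unsigned long *seen = kmalloc(sizeof(*seen), GFP_KERNEL);
            if (!seen)
                return -ENOMEM;
            *seen = msg_gen;                              /* this reader will block until the next write */
            filp->private_data = seen;                    /* per-reader state, no global pid map needed  */
            return 0;
        }

        static ssize_t mcast_read(struct file *filp, char __user *ubuf,
                                  size_t len, loff_t *off)
        {
            unsigned long *seen = filp->private_data;

            /* Block until the writer publishes a buffer this reader has not returned yet. */
            if (wait_event_interruptible(read_wq, *seen != msg_gen))
                return -ERESTARTSYS;
            *seen = msg_gen;
            /* ... copy msg_buf to ubuf with copy_to_user() under a lock, return the count ... */
            return 0;
        }

        /* release() would kfree(filp->private_data); the write handler copies the new message
         * into msg_buf, increments msg_gen and calls wake_up_interruptible(&read_wq). */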

    Read the article
