Search Results

Search found 45129 results on 1806 pages for 'public key'.


  • What are the repercussions of not checking existing data when adding a foreign key?

    - by scottm
    I've inherited a database that doesn't exactly strive for data integrity. I am trying to add some foreign keys to change that, but there is data in some tables that doesn't fit the constraints. Most likely the data won't be used again, so I want to know what problems I might face by leaving it there. The other option I see is to move it into some kind of table without referential constraints, just for historical purposes. So, what are the repercussions of not checking existing data? If I create a foreign key constraint on a table and don't check existing data, will the constraint still be enforced for all new data inserted into the table?
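    For context, not part of the question: in SQL Server this choice is explicit. WITH NOCHECK adds the constraint without validating existing rows, while new INSERTs and UPDATEs are still enforced. A sketch with made-up table names:

    -- WITH NOCHECK skips validating rows that already exist;
    -- the constraint still applies to all new data.
    ALTER TABLE dbo.Orders WITH NOCHECK
        ADD CONSTRAINT FK_Orders_Customers
        FOREIGN KEY (CustomerId) REFERENCES dbo.Customers (CustomerId);

    -- The trade-off: the constraint is marked untrusted, so the optimizer
    -- can no longer use it (e.g. for join elimination).
    SELECT name, is_not_trusted
    FROM sys.foreign_keys
    WHERE name = 'FK_Orders_Customers';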

    Read the article

  • Control XML serialization of Dictionary<K, T>

    - by Luca
    I'm investigating XML serialization and, since I use a lot of dictionaries, I would like to serialize them as well. I found the following solution for that (I'm quite proud of it! :) ).

    [XmlInclude(typeof(Foo))]
    public class XmlDictionary<TKey, TValue>
    {
        /// <summary>Key/value pair.</summary>
        public struct DictionaryItem
        {
            /// <summary>Dictionary item key.</summary>
            public TKey Key;
            /// <summary>Dictionary item value.</summary>
            public TValue Value;
        }

        /// <summary>Dictionary items.</summary>
        public DictionaryItem[] Items
        {
            get
            {
                List<DictionaryItem> items = new List<DictionaryItem>(ItemsDictionary.Count);
                foreach (KeyValuePair<TKey, TValue> pair in ItemsDictionary)
                {
                    DictionaryItem item;
                    item.Key = pair.Key;
                    item.Value = pair.Value;
                    items.Add(item);
                }
                return (items.ToArray());
            }
            set
            {
                ItemsDictionary = new Dictionary<TKey, TValue>();
                foreach (DictionaryItem item in value)
                    ItemsDictionary.Add(item.Key, item.Value);
            }
        }

        /// <summary>Indexer based on the dictionary key.</summary>
        public TValue this[TKey key]
        {
            get { return (ItemsDictionary[key]); }
            set { Debug.Assert(value != null); ItemsDictionary[key] = value; }
        }

        /// <summary>Delegate for getting the key from a dictionary value.</summary>
        public delegate TKey GetItemKeyDelegate(TValue value);

        /// <summary>Add a range of values, automatically determining the associated keys.</summary>
        public void AddRange(IEnumerable<TValue> values, GetItemKeyDelegate keygen)
        {
            foreach (TValue v in values)
                ItemsDictionary.Add(keygen(v), v);
        }

        /// <summary>Items dictionary.</summary>
        [XmlIgnore]
        public Dictionary<TKey, TValue> ItemsDictionary = new Dictionary<TKey, TValue>();
    }

    The classes deriving from this class are serialized in the following way:

    <FooDictionary xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <Items>
        <DictionaryItemOfInt32Foo>
          <Key/>
          <Value/>
        </DictionaryItemOfInt32Foo>
      </Items>
    </FooDictionary>

    This gives me a good solution, but:
    1. How can I control the name of the element DictionaryItemOfInt32Foo?
    2. What happens if I define a Dictionary<FooInt32, Int32> and I have the classes Foo and FooInt32?
    3. Is it possible to optimize the class above?
    Thank you very much!
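    On the first question, a sketch that is not from the original post: XmlSerializer derives the item element name from the type (hence DictionaryItemOfInt32Foo), and [XmlArrayItem] overrides it, which also sidesteps generated-name ambiguity of the Foo/FooInt32 kind. A minimal, self-contained illustration (Pair and Bag are made-up names):

    using System;
    using System.Xml.Serialization;

    public class Pair { public int Key; public string Value; }

    public class Bag
    {
        // XmlArrayItem renames each item element from "Pair" to "Entry"
        [XmlArray("Items")]
        [XmlArrayItem("Entry")]
        public Pair[] Items;
    }

    class Demo
    {
        static void Main()
        {
            Bag bag = new Bag { Items = new[] { new Pair { Key = 1, Value = "one" } } };
            new XmlSerializer(typeof(Bag)).Serialize(Console.Out, bag);
            // <Bag ...><Items><Entry><Key>1</Key><Value>one</Value></Entry></Items></Bag>
        }
    }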

    Read the article

  • Make a tree based on the key of each element in a list

    - by cocobear
    >>> s
    [{'000000': [['apple', 'pear']]}, {'100000': ['good', 'bad']}, {'200000': ['yeah', 'ogg']}, {'300000': [['foo', 'foo']]}, {'310000': [['#'], ['#']]}, {'320000': ['$', ['1']]}, {'321000': [['abc', 'abc']]}, {'322000': [['#'], ['#']]}, {'400000': [['yeah', 'baby']]}]
    >>> for i in s:
    ...     print i
    ...
    {'000000': [['apple', 'pear']]}
    {'100000': ['good', 'bad']}
    {'200000': ['yeah', 'ogg']}
    {'300000': [['foo', 'foo']]}
    {'310000': [['#'], ['#']]}
    {'320000': ['$', ['1']]}
    {'321000': [['abc', 'abc']]}
    {'322000': [['#'], ['#']]}
    {'400000': [['yeah', 'baby']]}

    I want to make a tree based on the key of each element in the list. The keys encode the hierarchy in their digits ('310000' belongs under '300000', '321000' under '320000', and so on), so logically the result would be:

    {'000000': [['apple', 'pear']]}
    {'100000': ['good', 'bad']}
    {'200000': ['yeah', 'ogg']}
    {'300000': [['foo', 'foo']]}
        {'310000': [['#'], ['#']]}
        {'320000': ['$', ['1']]}
            {'321000': [['abc', 'abc']]}
            {'322000': [['#'], ['#']]}
    {'400000': [['yeah', 'baby']]}

    Can a nested list implement this, or do I need a tree type?
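    A minimal sketch, assuming (as the keys above suggest) that a key's depth is the number of digits before its trailing zeros, so '321000' sits at depth 3 under '320000':

    def build_tree(s):
        # depth = number of significant (pre-trailing-zero) digits; '000000' is depth 1
        def depth(key):
            stripped = key.rstrip('0')
            return len(stripped) if stripped else 1

        root = []
        stack = []  # (depth, children-list) of the currently open ancestors
        for item in s:
            key = list(item)[0]
            d = depth(key)
            while stack and stack[-1][0] >= d:
                stack.pop()              # close siblings and deeper branches
            node = {'item': item, 'children': []}
            (stack[-1][1] if stack else root).append(node)
            stack.append((d, node['children']))
        return root

    Each node keeps the original one-entry dict plus a children list, which also answers the closing question: a nested structure of (item, children) pairs is enough, no dedicated tree type needed.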

    Read the article

  • Passing the "enter key" event to flash player in an ATL Window?

    - by Adam Naylor
    I have a Flash player (flash9.ocx) embedded in an ATL window and have coded functionality into the swf to respond to the return/enter key being pressed. It works fine from the standalone swf player, but as soon as it's played from within my embedded player it doesn't execute. It's as if my window is getting in the way somehow? Is there any way to pass the keypress through to the player? FYI, there isn't anything too weird in place on the form. Thanks!

    Read the article

  • NSSortDescriptor: finding the key path to use for sorting

    - by RickiG
    Hi, I am using an NSSortDescriptor to sort a collection of NSArrays. I then came across a case where the particular NSArray to be sorted contains an NSDictionary, which itself contains another NSDictionary. I would like to sort by the string paired with a key in that innermost dictionary. This is how I would reference the string:

    NSDictionary *productDict = [MyArray objectAtIndex:index];
    NSString *dealerName = [[productDict objectForKey:@"dealer"] objectForKey:@"name"];

    How would I use the dealerName in my NSSortDescriptor to sort the array?

    NSSortDescriptor *sortDesc = [[NSSortDescriptor alloc] initWithKey:/* ? */ ascending:YES];
    sortedDealerArray = [value sortedArrayUsingDescriptors:sortDesc];
    [sortDesc release];

    Hope someone could help me a bit with how I go about sorting according to keys inside objects inside other objects :) Thank you.
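    A sketch of one answer, not from the original thread: initWithKey: accepts a key-value-coding key path, so the nested dictionary can be reached with "dealer.name". Note too that sortedArrayUsingDescriptors: expects an NSArray of descriptors, not a single descriptor:

    NSSortDescriptor *sortDesc = [[NSSortDescriptor alloc] initWithKey:@"dealer.name"
                                                             ascending:YES];
    NSArray *sortedDealerArray =
        [value sortedArrayUsingDescriptors:[NSArray arrayWithObject:sortDesc]];
    [sortDesc release];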

    Read the article

  • What are the key steps a Developer needs to go through to become a good Project Manager?

    - by adpd
    I am a full-time Developer who aspires to be a great technical Project Manager. What are the key steps that you think I need to go through to achieve that aspiration? I am interested in professional qualifications (e.g. PRINCE2), training courses, personality traits, experiences, tips and techniques that would improve my chances of being the best I can be. I would be interested in hearing if anybody has done this, and whether they are happy with their change from development or wish they had stuck to development (and the reasons behind this decision).

    Read the article

  • In C#: How to declare a generic Dictionary whose key and value types have a common constraint type?

    - by Marcel
    Hi all, I want to declare a dictionary that stores typed IEnumerables of a specific type, with that exact type as the key, like so:

    private IDictionary<T, IEnumerable<T>> _dataOfType where T : BaseClass; // does not compile!

    The concrete classes I want to store all derive from BaseClass, hence the idea to use it as a constraint. The compiler complains that it expects a semicolon after the member name. If it worked, I would expect it to make later retrieval from the dictionary as simple as:

    IEnumerable<ConcreteData> concreteData;
    _sitesOfType.TryGetValue(typeof(ConcreteType), out concreteData);

    How do I define such a dictionary?
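    A sketch of the usual workaround (class and method names are made up): constraints attach to a generic type or method declaration, not to a field, so wrap a dictionary keyed by System.Type in a small generic class:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public abstract class BaseClass { }

    public class TypedStore<TBase> where TBase : class
    {
        private readonly IDictionary<Type, IEnumerable<TBase>> _dataOfType =
            new Dictionary<Type, IEnumerable<TBase>>();

        // the method-level constraint ties T to the common base
        public void Put<T>(IEnumerable<T> items) where T : class, TBase
        {
            _dataOfType[typeof(T)] = items.Cast<TBase>();
        }

        public IEnumerable<T> Get<T>() where T : class, TBase
        {
            IEnumerable<TBase> items;
            return _dataOfType.TryGetValue(typeof(T), out items)
                ? items.Cast<T>()
                : Enumerable.Empty<T>();
        }
    }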

    Read the article

  • Is it the filename or the whole URL that is used as a key in browser caches?

    - by Richard Turner
    It's common to want browsers to cache resources - JavaScript, CSS, images, etc. until there is a new version available, and then ensure that the browser fetches and caches the new version instead. One solution is to embed a version number in the resource's filename, but will placing the resources to be managed in this way in a directory with a revision number in it do the same thing? Is the whole URL to the file used as a key in the browser's cache, or is it just the filename itself and some meta-data? If my code changes from fetching '/r20/example.js' to '/r21/example.js', can I be sure that revision 20 of example.js was cached, but now revision 21 has been fetched instead and it is now cached?

    Read the article

  • Do class/object models have an out-of-the-box equivalent to a database foreign key constraint?

    - by Greg
    Hi, do class/object models have an out-of-the-box equivalent to a database foreign key constraint? Assume the language is C#, please. That is, say Class A has a field that references Class B and vice versa. If I have objects A and B (instantiated from these classes), what happens if I delete object B? Does it auto-delete, or does it throw a constraint issue if it still exists as a reference in object A? That is, for this scenario, is there a way to ensure that when object A is deleted, either (a) object B is deleted as well, like a cascade delete, or (b) a constraint exception is thrown because object A's reference to B is expected to be non-null?
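    For context, a sketch that is not from the original question: C# has no "delete" for objects at all. You can only drop references, and the garbage collector reclaims an object once nothing references it; nothing cascades and nothing throws, so both behaviors would have to be hand-coded:

    using System;

    class B { }
    class A { public B Ref; }

    class Demo
    {
        static void Main()
        {
            B b = new B();
            A a = new A { Ref = b };

            b = null;       // drops one reference; no "delete" happened
            GC.Collect();   // the object survives: a.Ref still points at it

            Console.WriteLine(a.Ref != null); // True: no cascade, no constraint exception
        }
    }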

    Read the article

  • Java: how does the key event handling mechanism (notifying KeyListeners) work?

    - by Carbonizer
    How does the application/JVM know which classes implement the key handling interfaces? Does it use Java reflection, or does it check all classes for the methods? How can an application, or the executing JVM, know to deliver the user event, i.e. to call the specific methods, on a class that implements the KeyListener interface? Does it look at all classes to see whether those methods are implemented, or how does it know which classes implement the KeyListener interface? And if you don't implement the KeyListener interface on a class but still implement all of its methods, will the class still process the user events that occur?
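    For context, a minimal sketch that is not from the original thread: neither the JVM nor AWT scans or reflects over classes. A listener receives key events only because it was explicitly registered on a component with addKeyListener(); a class that merely has the right method names but is never registered (or does not implement the interface) is never called:

    import java.awt.event.KeyAdapter;
    import java.awt.event.KeyEvent;
    import javax.swing.JFrame;
    import javax.swing.JTextField;

    public class KeyDemo {
        public static void main(String[] args) {
            JFrame frame = new JFrame("key demo");
            JTextField field = new JTextField(20);
            // Explicit registration is the whole mechanism: the component keeps
            // a list of listeners and invokes them when it dispatches the event.
            field.addKeyListener(new KeyAdapter() {
                @Override
                public void keyPressed(KeyEvent e) {
                    System.out.println("pressed: " + KeyEvent.getKeyText(e.getKeyCode()));
                }
            });
            frame.add(field);
            frame.pack();
            frame.setVisible(true);
        }
    }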

    Read the article

  • How to merge arrays with the same key and different values in PHP?

    - by Martin
    Hi guys, I have arrays similar to these:

    0 => Array ( [0] => Finance / Shopping / Food, [1] => 47 )
    1 => Array ( [0] => Finance / Shopping / Food, [1] => 25 )
    2 => Array ( [0] => Finance / Shopping / Electronic, [1] => 190 )

    I need to create one array with [0] as the key and [1] as the value. The tricky part is that if [0] is the same, it should add [1] to the existing value. So the result I want is:

    array ([Finance / Shopping / Food] => 72, [Finance / Shopping / Electronic] => 190);

    thanks
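    A sketch of the usual accumulator loop (assuming $rows holds the arrays above):

    <?php
    $rows = array(
        array('Finance / Shopping / Food', 47),
        array('Finance / Shopping / Food', 25),
        array('Finance / Shopping / Electronic', 190),
    );

    $merged = array();
    foreach ($rows as $row) {
        $key = $row[0];
        // the first occurrence starts the sum, later ones add to it
        $merged[$key] = (isset($merged[$key]) ? $merged[$key] : 0) + $row[1];
    }

    print_r($merged);
    // Array ( [Finance / Shopping / Food] => 72, [Finance / Shopping / Electronic] => 190 )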

    Read the article

  • How can I ask Hibernate to create an index on a foreign key (JoinColumn)?

    - by Kent Chen
    Hi, this is my model:

    class User {
        @CollectionOfElements
        @JoinTable(name = "user_type", joinColumns = @JoinColumn(name = "user_id"))
        @Column(name = "type", nullable = false)
        private List<String> types = new ArrayList<String>();
    }

    As you can imagine, there will be a table called "user_type" with two columns, one "user_id", the other "type". When I use hbm2ddl to generate the DDL, I get this table, along with the foreign key constraint on "user_id". However, there is no index for this column, and I need that index. How can I get Hibernate to generate it for me? Thank you!
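    A sketch of one route, assuming a JPA 2.1-level mapping is available (the index name is made up; @ElementCollection/@CollectionTable are the newer equivalents of @CollectionOfElements/@JoinTable):

    @ElementCollection
    @CollectionTable(name = "user_type",
                     joinColumns = @JoinColumn(name = "user_id"),
                     indexes = @Index(name = "idx_user_type_user_id", columnList = "user_id"))
    @Column(name = "type", nullable = false)
    private List<String> types = new ArrayList<String>();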

    Read the article

  • Best way to have a unique key over 500M varchar(255) records in MySQL/InnoDB?

    - by taw
    I have a url column with a unique key over it, but its performance on updates is absolutely atrocious. I suspect that's because the index doesn't all fit in memory. So I was thinking: how about adding an md5(url) column holding 16 bytes of binary data and putting the unique key on that instead? What would be the best datatype for it? I'd love to be able to just see the 32-character hex hash while MySQL converts it to/from 16 binary bytes and indexes that, as programs using the database might have some trouble with arbitrary binary data that I'd rather avoid if possible. (I'm also a bit afraid that MySQL might get some strange ideas about character sets and, for example, overallocate storage for it 3:1 because it thinks it might need utf8; how do I avoid that for sure?)
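    A sketch of that layout (table and column names are made up). BINARY(16) is the natural type: binary string types carry no character set, so there is no utf8 3:1 over-allocation to worry about, and HEX()/UNHEX() give the 32-character view at the boundary:

    ALTER TABLE urls
        ADD COLUMN url_md5 BINARY(16) NOT NULL,
        ADD UNIQUE KEY uk_url_md5 (url_md5);

    -- store the raw 16 bytes, read them back as 32 hex characters
    INSERT INTO urls (url, url_md5)
    VALUES ('http://example.com/', UNHEX(MD5('http://example.com/')));

    SELECT url, HEX(url_md5) AS url_md5_hex
    FROM urls
    WHERE url_md5 = UNHEX(MD5('http://example.com/'));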

    Read the article

  • What key concepts and nuances of C++ do you know?

    - by Narek
    What key points and concepts should a person know in C++ (and in programming in general) to be considered to have good C++ (and general programming) skills? e.g.

    // Even if sizeof(T) is not equal to 1, this code steps over array elements
    T v[N];
    for (T *p = v; *p != 0; p++)
        cout << *p << endl;

    P.S. I hope that by exchanging this information we will help each other get to know C++ and programming techniques better, by making explicit the notions we have gained from practice.
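    A compilable rendering of that snippet (the element type and the 0 sentinel are illustrative): the point is that pointer arithmetic is scaled, so ++p advances by sizeof(T) bytes, one whole element:

    #include <iostream>

    int main() {
        int v[] = {1, 2, 3, 0};           // 0 terminates the walk
        for (int *p = v; *p != 0; ++p)    // ++p moves one element, not one byte
            std::cout << *p << '\n';
        return 0;
    }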

    Read the article

  • CouchDB: emit with an array lookup key such that the order of array elements is ignored

    - by MatternPatching
    When indexing a CouchDB view, you can emit an array as the key, such as:

    emit(["one", "two", "three"], doc);

    I appreciate the fact that when searching the view the order is important, but sometimes I would like the view to ignore it. I have thought of a couple of options:
    1. By convention, just emit the contents in alphabetical order, and ensure that lookups use the same convention.
    2. Somehow hash in a manner that disregards the order, and emit/search based on that hash. (This is fairly easy if you simply hash each element individually, "sum" the hashes, then mod.)
    Note: I'm sure this may be covered somewhere in the authoritative guide, but I was unsuccessful in finding it.
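    Option 1 as a map-function sketch (the doc.tags field is made up): sorting a copy of the array gives every permutation of the same elements the same canonical key:

    function (doc) {
      if (doc.tags) {
        // slice() copies first so the sort does not mutate the document
        emit(doc.tags.slice().sort(), null);
      }
    }

    // canonicalize the lookup the same way, e.g.
    //   ?key=["one","three","two"]   (sorted before issuing the request)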

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules sessions at DevConnections this week one of the attendees asked a question that I didn't have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically, the output should be captured from anywhere – not just a page – and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text, then writing it back out. I've actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won't work outside of the Page class environment and it's not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments]

    Enter Response.Filter
    However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommon to be hit. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET.

    A common Example: Dynamic GZip Encoding
    A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code:

    WebUtils.GZipEncodePage();

    which is handled with a few lines of reusable code and a couple of static helper methods:

    /// <summary>
    /// Sets up the current page or handler to use GZip through a Response.Filter.
    /// IMPORTANT: You have to call this method before any output is generated!
    /// </summary>
    public static void GZipEncodePage()
    {
        HttpResponse Response = HttpContext.Current.Response;
        if (IsGZipSupported())
        {
            string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
            if (AcceptEncoding.Contains("deflate"))
            {
                Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                           System.IO.Compression.CompressionMode.Compress);
                Response.AppendHeader("Content-Encoding", "deflate");
            }
            else
            {
                Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                          System.IO.Compression.CompressionMode.Compress);
                Response.AppendHeader("Content-Encoding", "gzip");
            }
        }
        // Allow proxy servers to cache encoded and unencoded versions separately
        Response.AppendHeader("Vary", "Content-Encoding");
    }

    /// <summary>
    /// Determines if GZip is supported
    /// </summary>
    public static bool IsGZipSupported()
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
        if (!string.IsNullOrEmpty(AcceptEncoding) &&
            (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
            return true;
        return false;
    }

    GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression to the active Response.

    Response.Filter content is chunked
    So implementing a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it's written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines, since you don't directly get access to all of the response content, which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required, one that basically captures all the Write() buffers passed into a cached stream and then makes the stream available only when it's complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I've used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations.

    Creating a semi-generic Response Filter Stream Class
    What I ended up with is a ResponseFilterStream class that provides a handful of events that allow you to capture and/or transform Response content.
    The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it's easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events:

    CaptureStream, CaptureString
    Capture the output only and provide either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream.

    TransformStream, TransformString
    Allow you to transform the complete response output with events that receive a MemoryStream or String respectively; you can modify the output and return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first, then write the entire buffer into the response.

    TransformWrite, TransformWriteString
    Allow you to transform the Response data as it is written, in its original chunk size, in the Stream's Write() method. Unlike TransformStream/TransformString, which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there's no caching involved, but can cause problems due to searched content splitting over multiple chunks.

    Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event:

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformStream += filter_TransformStream;
    Response.Filter = filter;

    and the event handler to do the transformation:

    MemoryStream filter_TransformStream(MemoryStream ms)
    {
        Encoding encoding = HttpContext.Current.Response.ContentEncoding;
        string output = encoding.GetString(ms.ToArray());
        output = FixPaths(output);
        ms = new MemoryStream(output.Length);
        byte[] buffer = encoding.GetBytes(output);
        ms.Write(buffer, 0, buffer.Length);
        return ms;
    }

    private string FixPaths(string output)
    {
        string path = HttpContext.Current.Request.ApplicationPath;
        // override root path wonkiness
        if (path == "/") path = "";
        output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/");
        return output;
    }

    The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that's been modified or a brand new one – which is then sent back as the final response. The above code can be simplified even more by using the string version events, which handle the stream-to-string conversions for you:

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformString += filter_TransformString;
    Response.Filter = filter;

    and the event handler to do the transformation, calling the same FixPaths method shown above:

    string filter_TransformString(string output)
    {
        return FixPaths(output);
    }

    The events for capturing output, and for capturing and transforming chunks, work in a very similar way. By using events to handle the transformations, ResponseFilterStream becomes a reusable component and we don't have to create a new stream class or subclass an existing Stream based class. By the way, the example used here is kind of a cool trick: it finds "~/" expressions inside of the final generated HTML output – even in plain HTML markup, not just server controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do.
    So you can write plain old HTML like this:

    <a href="~/default.aspx">Home</a>

    and have it turned into:

    <a href="/myVirtual/default.aspx">Home</a>

    without having to use an ASP.NET control like Hyperlink or Image, or having to constantly use:

    <img src="<%= ResolveUrl("~/images/home.gif") %>" />

    in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose). I can't take credit for this idea: while discussing the Response.Filter issues on Twitter I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable, however, since there's definitely some overhead to using a Response.Filter in general, and especially one that caches the output and then re-writes it later. Make sure to test for performance anytime you hook up a Response.Filter, and make sure it doesn't end up killing perf on you. You've been warned :-}.

    How does ResponseFilterStream work?
    The big win of this implementation IMHO is that it's a reusable component – so for implementation there's no new class, no subclassing – you simply attach to an event to implement an event handler method with a straightforward signature to retrieve the stream or string you're interested in. The implementation is based on a subclass of Stream, as is required in order to handle the Response.Filter requirements. What's different from other implementations I've seen in various places is that it supports capturing output as a whole, to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn't pass on the Write immediately to the filter's internal stream. The data is cached, and only when the Flush() method is called to finalize the Stream's output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here's the implementation of ResponseFilterStream:

    /// <summary>
    /// A semi-generic Stream implementation for Response.Filter with
    /// an event interface for handling Content transformations via
    /// Stream or String.
    /// <remarks>
    /// Use with care for large output as this implementation copies
    /// the output into a memory stream and so increases memory usage.
    /// </remarks>
    /// </summary>
    public class ResponseFilterStream : Stream
    {
        /// <summary>
        /// The original stream
        /// </summary>
        Stream _stream;

        /// <summary>
        /// Current position in the original stream
        /// </summary>
        long _position;

        /// <summary>
        /// Stream that original content is read into
        /// and then passed to the TransformStream function
        /// </summary>
        MemoryStream _cacheStream = new MemoryStream(5000);

        /// <summary>
        /// Internal pointer that keeps track of the size
        /// of the cacheStream
        /// </summary>
        int _cachePointer = 0;

        public ResponseFilterStream(Stream responseStream)
        {
            _stream = responseStream;
        }

        /// <summary>
        /// Determines whether the stream is captured
        /// </summary>
        private bool IsCaptured
        {
            get
            {
                if (CaptureStream != null || CaptureString != null ||
                    TransformStream != null || TransformString != null)
                    return true;
                return false;
            }
        }

        /// <summary>
        /// Determines whether the Write method is outputting data immediately
        /// or delaying output until Flush() is fired.
        /// </summary>
        private bool IsOutputDelayed
        {
            get
            {
                if (TransformStream != null || TransformString != null)
                    return true;
                return false;
            }
        }

        /// <summary>
        /// Event that captures Response output and makes it available
        /// as a MemoryStream instance. Output is captured but won't
        /// affect Response output.
        /// </summary>
        public event Action<MemoryStream> CaptureStream;

        /// <summary>
        /// Event that captures Response output and makes it available
        /// as a string. Output is captured but won't affect Response output.
        /// </summary>
        public event Action<string> CaptureString;

        /// <summary>
        /// Event that allows you to transform the stream as each chunk of
        /// the output is written in the Write() operation of the stream.
        /// This means it's possible/likely that the input
        /// buffer will not contain the full response output but only
        /// one of potentially many chunks.
        ///
        /// This event is called as part of the filter stream's Write()
        /// operation.
        /// </summary>
        public event Func<byte[], byte[]> TransformWrite;

        /// <summary>
        /// Event that allows you to transform the response stream as
        /// each chunk of byte[] output is written during the stream's write
        /// operation. This means it's possible/likely that the string
        /// passed to the handler only contains a portion of the full
        /// output. Typical buffer chunks are around 16k a piece.
        ///
        /// This event is called as part of the stream's Write operation.
        /// </summary>
        public event Func<string, string> TransformWriteString;

        /// <summary>
        /// This event allows capturing and transformation of the entire
        /// output stream by caching all write operations and delaying final
        /// response output until Flush() is called on the stream.
        /// </summary>
        public event Func<MemoryStream, MemoryStream> TransformStream;

        /// <summary>
        /// Event that can be hooked up to handle Response.Filter
        /// transformation. Passed a string that you can modify and
        /// return back as a return value. The modified content
        /// will become the final output.
        /// </summary>
        public event Func<string, string> TransformString;

        protected virtual void OnCaptureStream(MemoryStream ms)
        {
            if (CaptureStream != null)
                CaptureStream(ms);
        }

        private void OnCaptureStringInternal(MemoryStream ms)
        {
            if (CaptureString != null)
            {
                string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
                OnCaptureString(content);
            }
        }

        protected virtual void OnCaptureString(string output)
        {
            if (CaptureString != null)
                CaptureString(output);
        }

        protected virtual byte[] OnTransformWrite(byte[] buffer)
        {
            if (TransformWrite != null)
                return TransformWrite(buffer);
            return buffer;
        }

        private byte[] OnTransformWriteStringInternal(byte[] buffer)
        {
            Encoding encoding = HttpContext.Current.Response.ContentEncoding;
            string output = OnTransformWriteString(encoding.GetString(buffer));
            return encoding.GetBytes(output);
        }

        private string OnTransformWriteString(string value)
        {
            if (TransformWriteString != null)
                return TransformWriteString(value);
            return value;
        }

        protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms)
        {
            if (TransformStream != null)
                return TransformStream(ms);
            return ms;
        }

        /// <summary>
        /// Allows transforming of strings.
        ///
        /// Note this handler is internal and not meant to be overridden,
        /// as the TransformString event has to be hooked up in order
        /// for this handler to even fire, to avoid the overhead of string
        /// conversion on every pass through.
        /// </summary>
        private string OnTransformCompleteString(string responseText)
        {
            if (TransformString != null)
                responseText = TransformString(responseText);
            return responseText;
        }

        /// <summary>
        /// Wrapper method for OnTransformString that handles
        /// stream-to-string and vice versa conversions
        /// </summary>
        internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms)
        {
            if (TransformString == null)
                return ms;

            string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
            content = TransformString(content);
            byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content);
            ms = new MemoryStream();
            ms.Write(buffer, 0, buffer.Length);
            return ms;
        }

        public override bool CanRead
        {
            get { return true; }
        }

        public override bool CanSeek
        {
            get { return true; }
        }

        public override bool CanWrite
        {
            get { return true; }
        }

        public override long Length
        {
            get { return 0; }
        }

        public override long Position
        {
            get { return _position; }
            set { _position = value; }
        }

        public override long Seek(long offset, System.IO.SeekOrigin direction)
        {
            return _stream.Seek(offset, direction);
        }

        public override void SetLength(long length)
        {
            _stream.SetLength(length);
        }

        public override void Close()
        {
            _stream.Close();
        }

        /// <summary>
        /// Override flush by writing out the cached stream data
        /// </summary>
        public override void Flush()
        {
            if (IsCaptured && _cacheStream.Length > 0)
            {
                // Check for transform implementations
                _cacheStream = OnTransformCompleteStream(_cacheStream);
                _cacheStream = OnTransformCompleteStringInternal(_cacheStream);

                OnCaptureStream(_cacheStream);
                OnCaptureStringInternal(_cacheStream);

                // write the stream back out if output was delayed
                if (IsOutputDelayed)
                    _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length);

                // Clear the cache once we've written it out
                _cacheStream.SetLength(0);
            }

            // default flush behavior
            _stream.Flush();
        }

        public override int Read(byte[] buffer, int offset, int count)
        {
            return _stream.Read(buffer, offset, count);
        }

        /// <summary>
        /// Overridden to capture output written by ASP.NET
        /// into a cached stream that is written out later when Flush()
        /// is called.
        /// </summary>
        public override void Write(byte[] buffer, int offset, int count)
        {
            if (IsCaptured)
            {
                // copy to holding buffer only - we'll write out later
                _cacheStream.Write(buffer, 0, count);
                _cachePointer += count;
            }

            // just transform this buffer
            if (TransformWrite != null)
                buffer = OnTransformWrite(buffer);
            if (TransformWriteString != null)
                buffer = OnTransformWriteStringInternal(buffer);

            if (!IsOutputDelayed)
                _stream.Write(buffer, offset, buffer.Length);
        }
    }

    The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain-jane pass-through stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet.

    A few things to consider

    Performance
    Response.Filter is not great for performance in general, as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream, which is necessary since you may have to look at the entire response. If you have large pages in particular, this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it's not killing your app's performance.

    Response.Filter works everywhere
    A few questions came up in comments and discussion as to capturing ALL output hitting the site, and yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this, however, you'll want to be very careful and decide which content you actually want to capture, especially in IIS 7, which passes ALL content – including static images/CSS etc. – through the ASP.NET pipeline. So it is important to filter only on what you're looking for – like the page extension or, maybe more effectively, the Response.ContentType.

    Response.Filter chaining
    Originally I thought that filter chaining doesn't work at all due to a bug in the stream implementation code. But it's quite possible to assign multiple filters to the Response.Filter property.
    So the following actually works, both compressing the output and applying the transformed content:

    WebUtils.GZipEncodePage();

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformString += filter_TransformString;
    Response.Filter = filter;

    However, the following does not work, resulting in invalid content encoding errors:

    ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
    filter.TransformString += filter_TransformString;
    Response.Filter = filter;

    WebUtils.GZipEncodePage();

    In other words, multiple Response filters can work together, but it depends entirely on the implementation whether they can be chained, or in which order they can be chained. In this case, running the GZip/Deflate stream filters apparently relies on the original content length of the output and chokes when the content is modified. But if the compression is attached first, it works fine, as unintuitive as that may seem.

    Resources
    Download example code
    Capture Output from ASP.NET Pages

    © Rick Strahl, West Wind Technologies, 2005-2010
    Posted in ASP.NET

    Read the article

  • AclPermissionsFacet failure when repairing a SQL 2008 R2 install

    - by photo_tom
    While attempting to do an installation repair of SQL 2008 R2, I'm failing the pre-check rules. The module that is failing is AclPermissionsFacet, with this message: "The SQL Server registry keys from a prior installation cannot be modified. To continue, see SQL Server Setup documentation about how to fix registry keys." In the log file "Detail_GlobalRules.txt", I found the following error messages:

    2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSearch.
    2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\SQLServerSCP.
    2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQLServer.
    2010-09-05 07:24:39 Slp: Could not open sub key key HKEY_LOCAL_MACHINE\Software\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\SQLServerAgent.

    When I look at these keys in the registry, all of their permissions are blank. My problem is that I cannot find any good information on how to reset these keys. This is my new home dev box, and I think these settings got corrupted during the migration from my previous machine. Reviewing the web, there doesn't seem to be good information, and what there is suggests using subinacl.exe. But after trying it and seeing it is an XP-based program, I'm at a loss on how to continue. Configuration: Windows 7/64-bit Home Edition, SQL 2008 R2, 6 GB RAM. Suggestions?

    Read the article

  • Apache renewed SSL cert not working [on hold]

    - by Varun S
    Downloaded a new SSL cert from GoDaddy and installed it on an apache2 server: I put the cert in the /etc/ssl/certs/ folder, put gd_bundle.crt in the /etc/ssl/ folder, and the private key is in /etc/ssl/private/private.key. I just replaced the original files with the new files; I did not replace the private key. I restarted the server, but the website is still showing the old certificate date. What am I doing wrong and how do I resolve it? My httpd.conf file is empty; the certificate config is in the sites-enabled/default-ssl file. The server is apache2 running on Ubuntu 14.04.

    SSLEngine on

    # A self-signed (snakeoil) certificate can be created by installing
    # the ssl-cert package. See
    # /usr/share/doc/apache2.2-common/README.Debian.gz for more info.
    # If both key and certificate are stored in the same file, only the
    # SSLCertificateFile directive is needed.
    SSLCertificateFile /etc/ssl/certs/2b1f6d308c2f9b.crt
    SSLCertificateKeyFile /etc/ssl/private/private.key

    # Server Certificate Chain:
    # Point SSLCertificateChainFile at a file containing the
    # concatenation of PEM encoded CA certificates which form the
    # certificate chain for the server certificate. Alternatively
    # the referenced file can be the same as SSLCertificateFile
    # when the CA certificates are directly appended to the server
    # certificate for convinience.
    SSLCertificateChainFile /etc/ssl/gd_bundle.crt

    -rwxr-xr-x 1 root root 1944 Aug 16 06:34 /etc/ssl/certs/2b1f6d308c2f9b.crt
    -rwxr-xr-x 1 root root 3197 Aug 16 06:10 /etc/ssl/gd_bundle.crt
    -rw-r--r-- 1 root root 1679 Oct  3  2013 /etc/ssl/private/private.key

    /etc/apache2/sites-available/default-ssl: # SSLCertificateFile directive is needed.
    /etc/apache2/sites-available/default-ssl: SSLCertificateFile /etc/ssl/certs/2b1f6d308c2f9b.crt
    /etc/apache2/sites-available/default-ssl: SSLCertificateKeyFile /etc/ssl/private/private.key
    /etc/apache2/sites-available/default-ssl: # Point SSLCertificateChainFile at a file containing the
    /etc/apache2/sites-available/default-ssl: # the referenced file can be the same as SSLCertificateFile
    /etc/apache2/sites-available/default-ssl: SSLCertificateChainFile /etc/ssl/gd_bundle.crt
    /etc/apache2/sites-enabled/default-ssl: # SSLCertificateFile directive is needed.
    /etc/apache2/sites-enabled/default-ssl: SSLCertificateFile /etc/ssl/certs/2b1f6d308c2f9b.crt
    /etc/apache2/sites-enabled/default-ssl: SSLCertificateKeyFile /etc/ssl/private/private.key
    /etc/apache2/sites-enabled/default-ssl: # Point SSLCertificateChainFile at a file containing the
    /etc/apache2/sites-enabled/default-ssl: # the referenced file can be the same as SSLCertificateFile
    /etc/apache2/sites-enabled/default-ssl: SSLCertificateChainFile /etc/ssl/gd_bundle.crt

    Read the article

  • Why does RSA SSH authentication only work after console log-in?

    - by smorhaim
    I set up RSA authentication on one of my Ubuntu servers; however, after every restart I can't log in via SSH with RSA. In order to log in with SSH I need to first log in via the console; then RSA starts working. Why??? Below are my sshd config file as well as output from the ssh -vv command before console log-in and after.

    Before console log-in:

    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7ff8d8c242c0)
    debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7ff8d8c24cf0)
    debug1: Authentications that can continue: publickey
    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim
    debug2: we sent a publickey packet, wait for reply
    debug1: Authentications that can continue: publickey
    debug1: Offering RSA public key: /Users/smorhaim/.ssh/id_rsaadmin
    debug2: we sent a publickey packet, wait for reply
    debug1: Authentications that can continue: publickey
    debug2: we did not send a packet, disable method
    debug1: No more authentication methods to try.
    Permission denied (publickey).

    After console log-in:

    debug1: SSH2_MSG_SERVICE_ACCEPT received
    debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7f91c14242c0)
    debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7f91c1424ae0)
    debug1: Authentications that can continue: publickey
    debug1: Next authentication method: publickey
    debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim
    debug2: we sent a publickey packet, wait for reply
    debug1: Server accepts key: pkalg ssh-rsa blen 279
    debug2: input_userauth_pk_ok: fp b1:d5:90:43:be:43:52:a9:7f:05:c7:04:86:57:b3:ff
    debug1: Authentication succeeded (publickey).
    Authenticated to 10.10.30.151 ([10.10.30.151]:22).

    sshd config:

    Port 22
    Protocol 2
    ListenAddress 10.10.30.151
    UsePrivilegeSeparation yes
    SyslogFacility AUTHPRIV
    PermitRootLogin no
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    UsePAM yes
    X11Forwarding yes
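    A common cause of this exact symptom, sketched here as a guess rather than a confirmed diagnosis: an encrypted (ecryptfs) home directory. ~/.ssh/authorized_keys only becomes readable once the first login mounts the home, so key auth fails until a console login has happened. The usual workaround is to keep the keys somewhere sshd can always read:

    # /etc/ssh/sshd_config -- assumes the encrypted home is the culprit
    AuthorizedKeysFile /etc/ssh/authorized_keys/%u

    then, once per user:

    sudo mkdir -p /etc/ssh/authorized_keys
    sudo cp ~/.ssh/authorized_keys /etc/ssh/authorized_keys/$USER
    sudo service ssh restart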

    Read the article

  • What happens when the USB key or SD card I've installed VMware ESXi on fails?

    - by ewwhite
    An SD (SDHC) card installed in an HP ProLiant DL380p Gen8 server running VMware ESXi just failed. I encountered some rather ominous looking messages on the vCenter console and in the HP ProLiant ILO event log... Lost connectivity to the device ... backing the boot filesystem. As a result, host configuration changes will not be saved to persistent storage. Embedded Flash/SD-CARD: Error writing media 0, physical block 848880: Stack Exception. VMware advocates the use of USB and SD (SDHC) boot devices for ESXi. It was one of the main reasons the smaller footprint ESXi was developed (versus the older ESX). I've spent much time highlighting the differences between ESXi's installable and embedded modes to coworkers and clients. However, these failures do seem to happen. In this case, this is my third instance. Luckily, this is a vSphere cluster with SAN storage. What steps should be taken to remediate this failure?

    Read the article

  • Problems setting up a VPN: can connect but can't ping anyone

    - by Fernando
    This is my first time setting up a VPN. Clients can connect but can't ping other machines. This is certainly a route problem, but I can't find the right way to configure it. Here is a sample layout of the two LANs I want to connect: I want machines from 192.168.1.0/24 to be able to connect to 192.168.0.0/24 as if they were on the same network. For the VPN network, I would like to use the 10.0.0.0/24 range. Here is my server.conf:

    proto udp
    port 1194
    dev tun
    server 10.0.0.0 255.255.255.0
    push "route 192.168.0.0 255.255.255.0 192.168.0.1"
    push "dhcp-option DNS 192.168.0.1"
    push "dhcp-option WINS 192.168.0.1"
    comp-lzo
    keepalive 10 120
    float
    max-clients 10
    persist-key
    persist-tun
    log-append /var/log/openvpn.log
    verb 6
    tls-server
    dh /etc/openvpn/keys/dh1024.pem
    ca /etc/openvpn/keys/ca.crt
    cert /etc/openvpn/keys/server.crt
    key /etc/openvpn/keys/server.key
    tls-auth /etc/openvpn/keys/mykey.key 0
    status /var/log/openvpn.stats

    And one of my clients, 192.168.1.2:

    client
    dev tap
    proto udp
    remote my.no-ip.address 1194
    route 192.168.1.0 255.0.0.0 192.168.1.1 3
    resolv-retry infinite
    nobind
    persist-key
    persist-tun
    ca "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\ca.crt"
    cert "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\test1.crt"
    key "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\test1.key"
    tls-auth "C:\\Program Files\\OpenVPN\\easy-rsa\\keys\\mykey.key" 1
    ns-cert-type server
    cipher BF-CBC
    comp-lzo
    verb 1

    What exactly am I doing wrong? All machines can connect to OpenVPN, but ping doesn't work. In the client log I see the following error:

    Wed Feb 16 09:43:23 2011 OpenVPN ROUTE: OpenVPN needs a gateway parameter for a --route option and no default was specified by either --route-gateway or --ifconfig options
    Wed Feb 16 09:43:23 2011 OpenVPN ROUTE: failed to parse/resolve route for host/network: 10.0.0.1

    Thanks!
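    Two mismatches are visible in the configs as quoted, and either could plausibly produce this; a hedged sketch of the corrections, not a verified fix:

    # client -- the device type must match the server's "dev tun":
    dev tun

    # client -- 255.0.0.0 is a /8 mask on a /24 network; if this route is
    # needed at all it should be:
    route 192.168.1.0 255.255.255.0

    # server -- pushing the LAN address 192.168.0.1 as the route gateway makes
    # clients resolve a gateway they cannot reach; omitting it lets OpenVPN
    # substitute the VPN gateway:
    push "route 192.168.0.0 255.255.255.0"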

    Read the article

  • How to register an agent with launchd

    - by Konrad Rudolph
    I'm unable to schedule a periodic launch with launchctl/launchd on OS X (Leopard). Basically, I'm unable to find a step-by-step list of instructions on the web, and the intuitive approach doesn't work. The sync.plist file:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>net.madrat.utils.sync</string>
        <key>Program</key>
        <string>rsync</string>
        <key>ProgramArguments</key>
        <array>
            <string>-ar</string>
            <string>/path/to/folder/</string>
            <string>/path/to/backup/</string>
        </array>
        <key>StartInterval</key>
        <integer>7200</integer>
    </dict>
    </plist>

    I've put this script inside the path ~/Library/LaunchAgents. Next, I registered the script using:

    launchctl load ~/Library/LaunchAgents/sync.plist

    Finally, to test that it works, I started the job:

    launchctl start net.madrat.utils.sync

    Nothing happened. Manually executing the rsync command in the terminal yields the expected result. I'm fairly sure that the job was registered correctly, because if I try to start a non-existing job I get an error message (which I didn't get from the above command). What did I do wrong?
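    A hedged guess at the cause, with a sketch of the usual fix: launchd may not search the shell's PATH, so a bare "rsync" in Program can fail to resolve, and when Program is present the ProgramArguments array is taken as the full argv, leaving "-ar" as argv[0] where rsync ignores it. Dropping Program and putting the absolute path first in ProgramArguments avoids both problems:

    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/rsync</string>
        <string>-ar</string>
        <string>/path/to/folder/</string>
        <string>/path/to/backup/</string>
    </array>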

    Read the article

  • How to program a special key on my laptop?

    - by daniel11
    OK, so I have a set of touch-screen buttons on the top of my laptop's keyboard (one turns on power save mode, one disables/enables the mouse touch pad, one toggles the wifi adapter, and two turn the volume up/down). However, there's another one that opened a file backup utility that came with my laptop. Since buying the laptop I've uninstalled that utility, rendering this button useless. So I was wondering: is it possible to reprogram that button to open another file or program on my computer? And if so, what are the requirements and how would I go about doing it? Thanks in advance!

    Read the article

  • OS X 10.6 Apply ipfw rules at startup

    - by Michael Irey
    I have a couple of firewall rules I would like to apply at startup. I have followed the instructions from http://images.apple.com/support/security/guides/docs/SnowLeopard_Security_Config_v10.6.pdf, page 192. However, the rules do not get applied at startup. I am running 10.6.8, NON-Server Edition. I can, however, run the following (which applies the rules correctly):

    sudo ipfw /etc/ipfw.conf

    which results in:

    00100 fwd 127.0.0.1,8080 tcp from any to any dst-port 80 in
    00200 fwd 127.0.0.1,8443 tcp from any to any dst-port 443 in
    65535 allow ip from any to any

    Here is my /etc/ipfw.conf:

    # To get real 80 and 443 while loading vagrant vbox
    add fwd localhost,8080 tcp from any to any 80 in
    add fwd localhost,8443 tcp from any to any 443 in

    Here is my /Library/LaunchDaemons/ipfw.plist:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>ipfw</string>
        <key>Program</key>
        <string>/sbin/ipfw</string>
        <key>ProgramArguments</key>
        <array>
            <string>/sbin/ipfw</string>
            <string>/etc/ipfw.conf</string>
        </array>
        <key>RunAtLoad</key>
        <true />
    </dict>
    </plist>

    The permissions of all the files seem to be appropriate:

    -rw-rw-r-- 1 root wheel 151 Oct 11 14:11 /etc/ipfw.conf
    -rw-rw-r-- 1 root wheel 438 Oct 11 14:09 /Library/LaunchDaemons/ipfw.plist

    Any thoughts or ideas on what could be wrong would be very helpful!
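    A few hedged checks rather than a definitive fix: loading the job by hand shows whether launchd accepts the plist at all, and the system log usually names the reason when it does not:

    sudo launchctl load -w /Library/LaunchDaemons/ipfw.plist
    sudo launchctl list | grep ipfw     # is the job registered?
    sudo ipfw list                      # did the rules land?
    grep ipfw /var/log/system.log       # launchd complaints, if any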

    Read the article
