Search Results

Search found 38328 results on 1534 pages for 'write xml'.


  • best way to write a -> ... -> [a] recursive functions in haskell

    - by Roman A. Taycher
    So I keep having this small problem where I have something like:

        func :: a -> b -> [a]   -- or basically any a -> ... -> [a], where ... is any types,
        func x y = func' [x] y  -- as long as they are used to generate a list of [a] from x

        func' :: [a] -> b -> [a]
        func' = undefined       -- situation-dependent; generates a list from each element and returns it as one long list

    Should I keep it like this? Should I use func' hidden by a where? Should I only use the [a] -> b -> [a] version and leave the responsibility of passing [variable] to the caller? I might well need to compose these functions and might want to mess around with the order, so I'm leaning towards option 3. What do you think?

    Read the article

  • Write transparent HTTP Proxy script in PHP

    - by Leo Izen
    Is there an easy forwarding/transparent PHP proxy script that I can host on my web server? These are my conditions: I'm using free web hosting, so I have pretty much no control over my machine. Otherwise I could use Perl's HTTP::Proxy module. This means no root password. It does run PHP, though. I already have a server running on port 80. What I mean is I would like to put a PHP script as index.php on my server that will forward all requests. I don't want a script like PHProxy or Glype where I go to the site, then enter a URL. I want a server so I can enter proxy.example.com:80 in Firefox's or IE's or whatever's proxy settings and it will forward all requests to the server. Preferably (though not fatal if not possible) I would like for it to pass on the USER_AGENT environment variable (that's the browser) instead of setting itself to be the USER_AGENT. I can't start a new daemon; my server won't allow it. Is there a script that will do this? If so, which?

    Read the article

  • Exclude nodes based on attribute wildcard in XSL node selection

    - by C A
    Using CruiseControl for continuous integration, I have some annoyances with the WebLogic Ant tasks, which treat server debug information as warnings rather than debug output, so it shows up in my build report emails. The XML output from cruise is similar to:

        <cruisecontrol>
          <build>
            <target name="compile-xxx">
              <task name="xxx" />
            </target>
            <target name="xxx.weblogic">
              <task name="wldeploy">
                <message priority="warn">Message which isn't really a warning</message>
              </task>
            </target>
          </build>
        </cruisecontrol>

    In the cruisecontrol XSL template the current selection for the task list is:

        <xsl:variable name="tasklist" select="/cruisecontrol/build//target/task"/>

    What I would like is something which selects the tasklist in the same way, but doesn't include any target nodes which have the attribute name="*weblogic", where * is a wildcard. I have tried

        <xsl:variable name="tasklist" select="/cruisecontrol/build//target[@name!='*weblogic']/task"/>

    but this doesn't seem to have worked. I'm not an expert with XSLT, and just want to get this fixed so I can carry on the real development of the project. Any help is much appreciated.
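
    For reference: XPath 1.0 (which most XSLT 1.0 processors implement) has no wildcard string matching in a predicate, so @name!='*weblogic' simply compares against the literal string '*weblogic'. A substring test such as contains() (or substring() for a strict suffix match) performs the exclusion. Below is a minimal sketch, using Python's lxml purely to illustrate the expressions against a made-up document shaped like the one above:

        # Hypothetical sample log; the real document comes from CruiseControl.
        from lxml import etree

        log = etree.fromstring("""
        <cruisecontrol>
          <build>
            <target name="compile-xxx"><task name="xxx"/></target>
            <target name="xxx.weblogic"><task name="wldeploy"/></target>
          </build>
        </cruisecontrol>
        """)

        # '!=' is a literal comparison, so this still returns the wldeploy task.
        broken = log.xpath("/cruisecontrol/build//target[@name!='*weblogic']/task")

        # contains() (or a substring() suffix test) excludes the weblogic targets.
        fixed = log.xpath("/cruisecontrol/build//target[not(contains(@name,'weblogic'))]/task")

        print(len(broken), len(fixed))  # 2 1

    Dropped into the template, the variable would read select="/cruisecontrol/build//target[not(contains(@name,'weblogic'))]/task".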

    Read the article

  • Help writing a regular expression

    - by Ockonal
    Hello, I have to get the text between "Final-Recipient: RFC822;" and "Action" — the !HERE! part in: Final-Recipient: RFC822; !HERE! Action. It could be any string. I tried something like: $Pattern = '/Final-Recipient: RFC822; (.*) Action/'; but it doesn't work. Update: here is the string I'm trying to parse: http://dpaste.com/187638/
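
    For readers landing here: in bounce reports like the linked paste, a line break usually sits between the recipient address and the Action: header, and a plain '.' does not match newlines, so the pattern above never reaches 'Action'. Allowing whitespace/newlines around the capture (in PHP, \s+ plus the s modifier) fixes it. A minimal sketch of the same idea in Python, with a made-up input fragment:

        import re

        # Hypothetical fragment of a delivery-status report, as in the question.
        text = """Final-Recipient: RFC822; someone@example.com
        Action: failed"""

        # \s+ / \s* tolerate the newline between the address and the 'Action' header.
        match = re.search(r"Final-Recipient:\s*RFC822;\s*(.+?)\s*Action", text, re.DOTALL)
        if match:
            print(match.group(1))  # someone@example.com

    The PHP equivalent of that pattern would be '/Final-Recipient:\s*RFC822;\s*(.+?)\s*Action/s'.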

    Read the article

  • What to use to write tests? [Rails]

    - by yuval
    I asked a question about different testing frameworks yesterday. This question can be found here. Now that I have a better understanding of the different frameworks, I have a very simple question: with a basic understanding, but very limited experience writing tests with Rails' built-in testing framework (basic assertions), would it be okay for me to jump directly to testing with RSpec, Webrat, and Cucumber? Thank you! As a side note: yes, this is an opinion-based question, but I feel that the input received on this question is valuable enough to the community to keep it open. Thanks.

    Read the article

  • xml filtering with python

    - by saminny
    Hi, I have the following XML document:

        <node1>
          <node2 a1="x1"> ... </node2>
          <node2 a1="x2"> ... </node2>
          <node2 a1="x1"> ... </node2>
        </node1>

    I want to filter out node2 when a1 = "x2". The user provides the XPath and attribute values that need to be tested and filtered out. I looked at some solutions in Python like BeautifulSoup, but they are too complicated and don't preserve the case of the text. I want to keep the document the same as before, with some stuff filtered out. Can you recommend a simple and succinct solution? This should not be too complicated from the looks of it. The actual XML document is not as simple as the above, but the idea is the same.
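
    One low-ceremony option, assuming a third-party package is acceptable, is lxml: it preserves the document (including the case of tags and text) and lets the user-supplied XPath drive the filtering. A minimal sketch with made-up file names and the XPath hard-coded to match the example above:

        from lxml import etree

        # The XPath and attribute value would come from the user; hard-coded here
        # to match the example document in the question.
        xpath_expr = '//node2[@a1="x2"]'

        tree = etree.parse("input.xml")
        for element in tree.xpath(xpath_expr):
            element.getparent().remove(element)

        # Writes the remaining document back out, preserving element/attribute case.
        tree.write("filtered.xml", xml_declaration=True, encoding="utf-8")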

    Read the article

  • [C#] How do I make a hierarchy of objects from two alternating classes?

    - by Millicent
    Here's the scenario: I have two classes ("Page" and "Field") that are descended from a common class ("Pield"). They represent tags in an XML file and are in the following hierarchy:

        <page>
          <field>
            <page> ... </page>
            ...
          </field>
          ...
        </page>

    I.e.: Page and Field objects are in a hierarchy of alternating type (there may be more than one Page or Field on each rung of the hierarchy). Every Field and Page object has a parent property, which points to the respective parent object of the other type. This is not a problem unless the parent-child mechanism is controlled by the base class (Pield), which is shared by the two descended classes (Page and Field). Here is one try, which fails at the line "Pield<Page> child = new Pield<Page>(pchild, this);":

        class Pield<T>
        {
            private T _pield_parent;
            ...
            private void get_children()
            {
                ...
                Pield<Page> child = new Pield<Page>(pchild, this);
                ...
            }
            ...
        }
        class Page : Pield<Field> { ... }
        class Field : Pield<Page> { ... }

    Any ideas about how to solve this elegantly? Best, Millicent

    Read the article

  • Can JPA do batch update | put | write | insert as pm.makePersistentAll() does in GAE/J

    - by Kenyth
    I searched through multiple discussions here. Can someone just give me a quick and direct answer? And if with JPA you can't do a batch update, what if I don't use a transaction, and just use the following flow:

        em = emf.getEntityManager
        // do some query
        // make some data modification
        em.persist(..)
        // do some query
        // make some data modification
        em.persist(..)
        // do some query
        // make some data modification
        em.persist(..)
        ...
        em.close()

    How does this compare to a batch update with regard to performance, and how does it compare to a single transaction commit, measured by RPC calls to the datastore server, CPU cycles per request, and so on? Does every call to em.persist(..) before em.close() trigger an RPC call to the datastore server? Thanks very much for any response!

    Read the article

  • How can I read an integer value from an XML document in Objective-C?

    - by uttam
    This is my XML document; I want to use the values of ID and MLSID in my view controller. How can I read them?

        <Table>
          <ID>1</ID>
          <MLSID>70980420</MLSID>
          <STREET_NO>776</STREET_NO>
          <STREET_NAME>Boylston</STREET_NAME>
          <AreaName>Back Bay</AreaName>
        </Table>

    I have created an object class, Book, which stores all the values from the XML data, and then I put those objects in an array. Now I am able to retrieve the string values but not the integer values. This is my object class:

        @interface Book : NSObject {
            NSInteger ID;
            NSInteger MLSID; // Same name as the entity name.
        }

    How do I retrieve the value of MLSID in the view controller, or how do I print it?
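
    Whatever parser is used, element content comes back as a string and has to be converted explicitly; in Objective-C that usually means calling integerValue (or intValue) on the parsed NSString before assigning it to the NSInteger property. The general idea, sketched in Python with a made-up snippet shaped like the record above:

        import xml.etree.ElementTree as ET

        # Hypothetical snippet shaped like the question's <Table> record.
        record = ET.fromstring(
            "<Table><ID>1</ID><MLSID>70980420</MLSID><STREET_NO>776</STREET_NO></Table>"
        )

        # Element text is always a string; convert explicitly to get an integer.
        mls_id = int(record.findtext("MLSID"))
        print(mls_id + 1)  # 70980421 - now usable as a number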

    Read the article

  • Using C# to read/write an Excel spreadsheet

    - by Aaron
    Hi there, I need to make a program that writes some data to an Excel spreadsheet. Something basic along the lines of first name, last name, phone number, and e-mail per row, with each category in its own column. I don't even know where to start. If someone could tell me which assemblies to reference, and maybe point me to a website or a book that covers writing/reading data from an Excel spreadsheet via a C# program, that would be great. Many thanks.

    Read the article

  • Get a node's parent of a defined type using XPath

    - by IordanTanev
    Hi, I will give an example of the problem I have. My XML is like this:

        <root>
          <child Name="child1">
            <node>
              <element1>Value1</element1>
              <element2>Value2</element2>
            </node>
          </child>
          <child Name="child2">
            <element1>Value1</element1>
            <element2>Value2</element2>
            <element3>Value3</element3>
          </child>
        </root>

    I have an XPath expression that returns all "element2" nodes. Then, for every "element2" node, I want to find the "child" node that contains it. The problem is that between these two nodes there can be from 1 to n different nodes, so I can't just use "..". Is there something like "//" that will look up instead of down? Best regards, Iordan
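
    XPath does have an upward-looking counterpart to "//": the ancestor:: axis. Since it is a reverse axis, ancestor::child[1] selects the nearest enclosing <child> element no matter how many levels sit in between. A minimal sketch using Python's lxml against the sample document above (root element assumed to be <root>):

        from lxml import etree

        doc = etree.fromstring("""
        <root>
          <child Name="child1">
            <node>
              <element1>Value1</element1>
              <element2>Value2</element2>
            </node>
          </child>
          <child Name="child2">
            <element2>Value2</element2>
          </child>
        </root>
        """)

        for element2 in doc.xpath("//element2"):
            # Nearest ancestor named 'child', however deeply element2 is nested.
            owner = element2.xpath("ancestor::child[1]")[0]
            print(owner.get("Name"))  # child1, then child2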

    Read the article

  • Write a completely fluid HTML page (using '%' instead of 'px' for EVERY element height/width)

    - by barak manos
    I am designing my HTML pages to be completely fluid: for every element in the markup (HTML), I am using 'style="height:%;width:%"' (instead of 'style="height:*px;width:*px"'). This approach seems to work pretty well, except when the window dimensions change, in which case the page elements change their position and end up "on top of each other". I have come up with a pretty good run-time (JavaScript) solution to that:

        var elements = document.getElementsByTagName("*");
        for (var i = 0; i < elements.length; i++) {
            if (elements[i].style.height) elements[i].style.height = elements[i].offsetHeight + "px";
            if (elements[i].style.width)  elements[i].style.width  = elements[i].offsetWidth  + "px";
        }

    The only remaining problem is that if the user opens the website in a non-maximized window, the page fits that portion of the window; then, when the window is maximized, the page keeps its previous measurements. So in essence, I have solved the initial problem (handling changes to the window dimensions), but only when the window starts out at its maximum size. Any ideas on how to tackle this problem (given that I would like to keep my "% page design" as is)?

    Read the article

  • Return node level of hierarchical xml

    - by Ryan
    In a treeview you can retrieve the level of an item. I am trying to accomplish the same thing with the given input being an object. The XML data I will use for this example would be something like the following <?xml version="1.0" encoding="utf-8" ?> <Testing> <Numbers> <Number val="1"> <Number val="1.1"> <Number val="1.1.1"> <Number val="1.1.2" /> <Number val="1.1.3" /> <Number val="1.1.4" /> </Number> </Number> <Number val="1.2" /> <Number val="1.3" /> <Number val="1.4" /> </Number> <Number val="2" /> <Number val="3" /> <Number val="4" /> </Numbers> <Numbers> <Number val="5" /> <Number val="6" /> <Number val="7" /> <Number val="8" /> </Numbers> </Testing> This one is kicking my butt!
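
    If the level is to be derived from the XML itself rather than from a treeview control, it is simply the number of ancestors of the node, which XPath exposes as count(ancestor::*). A minimal sketch of that idea with Python's lxml and a trimmed copy of the document above:

        from lxml import etree

        doc = etree.fromstring("""
        <Testing>
          <Numbers>
            <Number val="1">
              <Number val="1.1">
                <Number val="1.1.1"/>
              </Number>
            </Number>
          </Numbers>
        </Testing>
        """)

        for number in doc.iter("Number"):
            # count(ancestor::*) gives the element's depth below the document root.
            level = int(number.xpath("count(ancestor::*)"))
            print(number.get("val"), level)
        # 1 2
        # 1.1 3
        # 1.1.1 4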

    Read the article

  • Write number N in base M

    - by VaioIsBorn
    I know how to do it mathematically, but now I want to do it in C++ using some easy algorithm. Is it possible? The question is that I need some methods/ideas for writing a number N in base M, for example 14 (base 10) in base 3: (14)_10 = 2*(3^0) + 1*(3^1) + 1*(3^2) = (112)_3, etc.
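
    The standard method is repeated division: n % m yields the least-significant digit and n // m (integer division) shifts the number right, until n reaches zero. A minimal sketch of that loop in Python (the same structure translates line for line into C++ with % and /); digits above 9 would need letters, as in hexadecimal:

        def to_base(n, m):
            """Return the digits of n written in base m, most-significant first."""
            if n == 0:
                return "0"
            digits = []
            while n > 0:
                digits.append(str(n % m))   # least-significant digit first
                n //= m
            return "".join(reversed(digits))

        print(to_base(14, 3))  # 112, i.e. 1*9 + 1*3 + 2*1 = 14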

    Read the article

  • create child nodes from sibling nodes until a different sibling occurs

    - by user364939
    Hi, does anyone know what the XSL would look like to transform this XML? There can be N nte's after pid and N nte's after pv1. The structure is guaranteed in that every nte that follows pid belongs to pid, and every nte following pv1 belongs to pv1. From:

        <pid> </pid>
        <nte>
          <nte-1>1</nte-1>
          <nte-3>Note 1</nte-3>
        </nte>
        <nte></nte>
        <pv1></pv1>
        <nte> </nte>

    into:

        <pid>
          <nte>
            <nte-1>1</nte-1>
            <nte-3>Note 1</nte-3>
          </nte>
          <nte> </nte>
        </pid>
        <pv1>
          <nte> </nte>
        </pv1>

    Thanks!
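
    The usual XSLT answers group each nte under its nearest preceding pid or pv1 (sibling recursion, or a key built on generate-id of the preceding pid/pv1), which the linked article covers. Purely to illustrate that grouping rule, not as the XSLT itself, here is a small Python/lxml sketch; the fragments are wrapped in a made-up <root> element since the snippets above have no single root:

        from lxml import etree

        src = etree.fromstring(
            "<root><pid/><nte><nte-1>1</nte-1><nte-3>Note 1</nte-3></nte><nte/>"
            "<pv1/><nte/></root>"
        )

        current = None
        for element in list(src):          # iterate over a copy; we re-parent as we go
            if element.tag in ("pid", "pv1"):
                current = element          # new owner for the nte's that follow
            elif element.tag == "nte" and current is not None:
                current.append(element)    # appending an lxml element moves (re-parents) it

        print(etree.tostring(src, pretty_print=True).decode())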

    Read the article

  • A way to "improve" the coding experience in VS2008

    - by stighy
    Hi SO gurus. I would like to automate some recurring work when I develop ASP.NET applications. For example, for each <asp:button> I create, I would like to insert the classic code onmouseover="...something" onmouseout="..something-again..". Is there a way to automatically add this piece of code in VS 2008? Some key combination to add a ready-made piece of code? Thank you

    Read the article

  • How to write a query to get all information from one table into another

    - by Dave
    I am building an Access database that will get data from an outside source and place it in a table that is linked to the data source. As we all know, you are not allowed to reconfigure a linked table. What I want to do is take the data from that linked table and make another table, to which I will be able to add new fields, and keep it in sync with the data that gets put into the linked table. Please help.

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules session at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere – not just a page and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close as well as read operations although for a Response.Filter that’s uncommon to be hit. The Response.Filter can be programmatically replaced at runtime which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter hooking up code based, dynamic  GZip compression for requests which is dead simple by applying a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> ///Sets up the current page or handler to use GZip through a Response.Filter ///IMPORTANT:  ///You have to call this method before any output is generated! 
/// </summary> public static void GZipEncodePage() {     HttpResponse Response = HttpContext.Current.Response;     if(IsGZipSupported())     {         stringAcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];         if(AcceptEncoding.Contains("deflate"))         {             Response.Filter = newSystem.IO.Compression.DeflateStream(Response.Filter,                                        System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "deflate");         }         else        {             Response.Filter = newSystem.IO.Compression.GZipStream(Response.Filter,                                       System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "gzip");                            }     }     // Allow proxy servers to cache encoded and unencoded versions separately    Response.AppendHeader("Vary", "Content-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it a more difficult to implement any filtering routines since you don’t directly get access to all of the response content which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then making the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content. 
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and can you modify the output then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back to as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream based classed. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML controls not HTML controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do. 
So you can write plain old HTML like this: <a href=”~/default.aspx”>Home</a>  and have it turned into: <a href=”/myVirtual/default.aspx”>Home</a>  without having to use an ASP.NET control like Hyperlink or Image or having to constantly use: <img src=”<%= ResolveUrl(“~/images/home.gif”) %>” /> in MVC applications (which frankly is one of the most annoying things about MVC especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter a hint from Dylan Beattie who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use an example for future demos of Response.Filter functionality in ASP.NET next I time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable however since there’s definitely some overhead to using a Response.Filter in general and especially on one that caches the output and the re-writes it later. Make sure to test for performance anytime you use Response.Filter hookup and make sure it' doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable  component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straight forward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream as is required in order to handle the Response.Filter requirements. What’s different than other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exception are the TransformWrite and TransformWrite events which operate only active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage. 
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of bytep[] output is written during the stream's write /// operation. This means it's possibly/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output. 
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is internal and not meant to be overridden /// as the TransformString Event has to be hooked up in order /// for this handler to even fire to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) TransformString(responseText); return responseText; } /// <summary> /// Wrapper method form OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream); 
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overriden to capture output written by ASP.NET and captured /// into a cached stream that is written out later when Flush() /// is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if ( IsCaptured ) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<t> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few Things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes you can definitely do that by assigning a Response.Filter inside of a module. If you do this however you’ll want to be very careful and decide which content you actually want to capture especially in IIS 7 which passes ALL content – including static images/CSS etc. through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension or maybe more effectively the Response.ContentType. Response.Filter Chaining Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property. 
So the following actually works to both compress the output and apply the transformed content: WebUtils.GZipEncodePage(); ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; However the following does not work resulting in invalid content encoding errors: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; WebUtils.GZipEncodePage(); In other words multiple Response filters can work together but it depends entirely on the implementation whether they can be chained or in which order they can be chained. In this case running the GZip/Deflate stream filters apparently relies on the original content length of the output and chokes when the content is modified. But if attaching the compression first it works fine as unintuitive as that may seem. Resources Download example code Capture Output from ASP.NET Pages © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
    Introduction How to use the Oracle Solaris 11 Automated install server in order to automate the Solaris 11 Zones installation. In this document I will demonstrate how to setup the Automated Install server in order to provide hands off installation process for the Global Zone and two Non Global Zones located on the same system. Architecture layout: Figure 1. Architecture layout Prerequisite Setup the Automated install server (AI) using the following instructions “How to Set Up Automated Installation Services for Oracle Solaris 11” The first step in this setup will be creating two Solaris 11 Zones configuration files. Step 1: Create the Solaris 11 Zones configuration files  The Solaris Zones configuration files should be in the format of the zonecfg export command. # zonecfg -z zone1 export > /var/tmp/zone1# cat /var/tmp/zone1 create -b set brand=solaris set zonepath=/rpool/zones/zone1 set autoboot=true set ip-type=exclusive add anet set linkname=net0 set lower-link=auto set configure-allowed-address=true set link-protection=mac-nospoof set mac-address=random end  Create a backup copy of this file under a different name, for example, zone2. # cp /var/tmp/zone1 /var/tmp/zone2 Modify the second configuration file with the zone2 configuration information You should change the zonepath for example: set zonepath=/rpool/zones/zone2 Step2: Copy and share the Zones configuration files  Create the NFS directory for the Zones configuration files # mkdir /export/zone_config Share the directory for the Zones configuration file # share –o ro /export/zone_config Copy the Zones configuration files into the NFS shared directory # cp /var/tmp/zone1 /var/tmp/zone2  /export/zone_config Verify that the NFS share has been created using the following command # share export_zone_config      /export/zone_config     nfs     sec=sys,ro Step 3: Add the Global Zone as client to the Install Service Use the installadm create-client command to associate client (Global Zone) with the install service To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service. # installadm create-client -e “0:14:4f:2:a:19" -n s11x86service You can verify the client creation using the following command # installadm list –c Service Name  Client Address     Arch   Image Path ------------  --------------     ----   ---------- s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service We can see the client install service name (s11x86service), MAC address (00:14:4F:02:0A:19 and Architecture (i386). Step 4: Global Zone manifest setup  First, get a list of the installation services and the manifests associated with them: # installadm list -m Service Name   Manifest        Status ------------   --------        ------ default-i386   orig_default   Default s11x86service  orig_default   Default Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows: # installadm export -n s11x86service -m orig_default >  /var/tmp/orig_default.xml Create a backup copy of this file under a different name, for example, orig-default2.xml, and edit the copy. 
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone.The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones: zone1 and zone2 You should replace the server_ip with the ip address of the NFS server. <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>   <ai_instance>     <target>       <logical>         <zpool name="rpool" is_root="true">           <filesystem name="export" mountpoint="/export"/>           <filesystem name="export/home"/>           <be name="solaris"/>         </zpool>       </logical>     </target>     <software type="IPS">       <source>         <publisher name="solaris">           <origin name="http://pkg.oracle.com/solaris/release"/>         </publisher>       </source>       <software_data action="install">         <name>pkg:/entire@latest</name>         <name>pkg:/group/system/solaris-large-server</name>       </software_data>     </software>     <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>     <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>   </ai_instance> </auto_install> The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service # installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest You can verify the manifest creation using the following command # installadm list -n s11x86service  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    orig_default        Default  None    gzmanifest          Inactive None We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service. Step 5: Non Global Zone manifest setup The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml. In this example we should use the default AI manifest for zones The following sample default AI manifest for zones # cat /usr/share/auto_install/manifest/zone_default.xml <?xml version="1.0" encoding="UTF-8"?> <!--  Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved. --> <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1"> <auto_install>     <ai_instance name="zone_default">         <target>             <logical>                 <zpool name="rpool">                     <!--                       Subsequent <filesystem> entries instruct an installer                       to create following ZFS datasets:                           <root_pool>/export         (mounted on /export)                           <root_pool>/export/home    (mounted on /export/home)                       Those datasets are part of standard environment                       and should be always created.                       
In rare cases, if there is a need to deploy a zone                       without these datasets, either comment out or remove                       <filesystem> entries. In such scenario, it has to be also                       assured that in case of non-interactive post-install                       configuration, creation of initial user account is                       disabled in related system configuration profile.                       Otherwise the installed zone would fail to boot.                     -->                     <filesystem name="export" mountpoint="/export"/>                     <filesystem name="export/home"/>                     <be name="solaris">                         <options>                             <option name="compression" value="on"/>                         </options>                     </be>                 </zpool>             </logical>         </target>         <software type="IPS">             <destination>                 <image>                     <!-- Specify locales to install -->                     <facet set="false">facet.locale.*</facet>                     <facet set="true">facet.locale.de</facet>                     <facet set="true">facet.locale.de_DE</facet>                     <facet set="true">facet.locale.en</facet>                     <facet set="true">facet.locale.en_US</facet>                     <facet set="true">facet.locale.es</facet>                     <facet set="true">facet.locale.es_ES</facet>                     <facet set="true">facet.locale.fr</facet>                     <facet set="true">facet.locale.fr_FR</facet>                     <facet set="true">facet.locale.it</facet>                     <facet set="true">facet.locale.it_IT</facet>                     <facet set="true">facet.locale.ja</facet>                     <facet set="true">facet.locale.ja_*</facet>                     <facet set="true">facet.locale.ko</facet>                     <facet set="true">facet.locale.ko_*</facet>                     <facet set="true">facet.locale.pt</facet>                     <facet set="true">facet.locale.pt_BR</facet>                     <facet set="true">facet.locale.zh</facet>                     <facet set="true">facet.locale.zh_CN</facet>                     <facet set="true">facet.locale.zh_TW</facet>                 </image>             </destination>             <software_data action="install">                 <name>pkg:/group/system/solaris-small-server</name>             </software_data>         </software>     </ai_instance> </auto_install> (optional) We can customize the default AI manifest for Zones Create a backup copy of this file under a different name, for example, zone_default2.xml and edit the copy # cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml Edit the copy (/var/tmp/zone_default2.xml) The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest. 
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2" Note: Do not use the following elements or attributes in a non-global zone AI manifest:     The auto_reboot attribute of the ai_instance element     The http_proxy attribute of the ai_instance element     The disk child element of the target element     The noswap attribute of the logical element     The nodump attribute of the logical element     The configuration element Step 6: Global Zone profile setup We are going to create a global zone configuration profile which includes the host information for example: host name, ip address name services etc… # sysconfig create-profile –o /var/tmp/gz_profile.xml You need to provide the host information for example:     Default router     Root password     DNS information The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration. Figure 2. Profile creation menu You can validate the profile using the following command # installadm validate -n s11x86service –P /var/tmp/gz_profile.xml Validating static profile gz_profile.xml...  Passed Next, instantiate a profile with the install service. In our case, use the following syntax for doing this # installadm create-profile -n s11x86service  -f /var/tmp/gz_profile.xml -p  gz_profile You can verify profile creation using the following command # installadm list –n s11x86service  -p Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         None We can see that the gz_profie has been created and associated with the s11x86service Install service. Step 7: Setup the Solaris Zones configuration profiles The step should be similar to the Global zone profile creation on step 6 # sysconfig create-profile –o /var/tmp/zone1_profile.xml # sysconfig create-profile –o /var/tmp/zone2_profile.xml You can validate the profiles using the following command # installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml Validating static profile zone1_profile.xml...  Passed # installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml Validating static profile zone2_profile.xml...  Passed Next, associate the profiles with the install service The following example adds the zone1_profile.xml configuration profile to the s11x86service  install service and specifies that zone1 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1 The following example adds the zone2_profile.xml configuration profile to the s11x86service  install service and specifies that zone2 should use this profile. # installadm create-profile -n s11x86service  -f  /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2 You can verify the profiles creation using the following command # installadm list -n s11x86service -p Service/Profile Name  Criteria --------------------  -------- s11x86service    zone1_profile      zonename = zone1    zone2_profile      zonename = zone2    gz_profile         None We can see that we have three profiles in the s11x86service  install service     Global Zone  gz_profile     zone1            zone1_profile     zone2            zone2_profile. 
Step 8: Global Zone setup Associate the global zone client with the manifest and the profile that we create in the previous steps The following example adds the manifest and profile to the client (global zone), where: gzmanifest  is the name of the manifest. gz_profile  is the name of the configuration profile. mac="0:14:4f:2:a:19" is the client (global zone) mac address s11x86service is the install service name. # installadm set-criteria -m  gzmanifest  –p  gz_profile  -c mac="0:14:4f:2:a:19" -n s11x86service You can verify the manifest and profile association using the following command # installadm list -n s11x86service -p  -m Service/Manifest Name  Status   Criteria ---------------------  ------   -------- s11x86service    gzmanifest                   mac  = 00:14:4F:02:0A:19    orig_default        Default  None Service/Profile Name  Criteria --------------------  -------- s11x86service    gz_profile         mac      = 00:14:4F:02:0A:19    zone2_profile      zonename = zone2    zone1_profile      zonename = zone1 Step 9: Provision the host with the Non-Global Zones The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system): Figure 3. Network Boot Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted.  Press the down arrow to highlight the second option labeled Automated Install, and then press Enter. The reason we need to do this is because we want to prevent a system from being automatically re-installed if it were to be booted from the network accidentally. Figure 4. GRUB Menu What follows is the continuation of a networked boot from the Automated Install server,. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed. You can list all the Solaris Zones status using the following command # zoneadm list -civ Once the Zones are in running state you can login into the Zone using the following command # zlogin –z zone1 Troubleshooting Automated Installations If an installation to a client system failed, you can find the client log at /system/volatile/install_log. NOTE: Zones are not installed if any of the following errors occurs:     A zone config file is not syntactically correct.     A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed     Required datasets are not configured in the global zone. For more troubleshooting information see “Installing Oracle Solaris 11 Systems” Conclusion This paper demonstrated the benefits of using the Automated Install server to simplify the Non Global Zones setup, including the creation and configuration of the global zone manifest and the Solaris Zones profiles.

    Read the article

  • log4j.xml configuration with <rollingPolicy> and <triggeringPolicy>

    - by Mike Smith
    I try to configure log4j.xml in such a way that file will be rolled upon file size, and the rolled file's name will be i.e: "C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log" I followed this discussion: http://web.archiveorange.com/archive/v/NUYyjJipzkDOS3reRiMz Finally it worked for me only when I add: try { Thread.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } to the method: public boolean isTriggeringEvent(Appender appender, LoggingEvent event, String filename, long fileLength) which make it works. The question is if there is a better way to make it work? since this method call many times and slow my program. Here is the code: package com.mypack.rolling; import org.apache.log4j.rolling.RollingPolicy; import org.apache.log4j.rolling.RolloverDescription; import org.apache.log4j.rolling.TimeBasedRollingPolicy; /** * Same as org.apache.log4j.rolling.TimeBasedRollingPolicy but acts only as * RollingPolicy and NOT as TriggeringPolicy. * * This allows us to combine this class with a size-based triggering policy * (decision to roll based on size, name of rolled files based on time) * */ public class CustomTimeBasedRollingPolicy implements RollingPolicy { TimeBasedRollingPolicy timeBasedRollingPolicy = new TimeBasedRollingPolicy(); /** * Set file name pattern. * @param fnp file name pattern. */ public void setFileNamePattern(String fnp) { timeBasedRollingPolicy.setFileNamePattern(fnp); } /* public void setActiveFileName(String fnp) { timeBasedRollingPolicy.setActiveFileName(fnp); }*/ /** * Get file name pattern. * @return file name pattern. */ public String getFileNamePattern() { return timeBasedRollingPolicy.getFileNamePattern(); } public RolloverDescription initialize(String file, boolean append) throws SecurityException { return timeBasedRollingPolicy.initialize(file, append); } public RolloverDescription rollover(String activeFile) throws SecurityException { return timeBasedRollingPolicy.rollover(activeFile); } public void activateOptions() { timeBasedRollingPolicy.activateOptions(); } } package com.mypack.rolling; import org.apache.log4j.helpers.OptionConverter; import org.apache.log4j.Appender; import org.apache.log4j.rolling.TriggeringPolicy; import org.apache.log4j.spi.LoggingEvent; import org.apache.log4j.spi.OptionHandler; /** * Copy of org.apache.log4j.rolling.SizeBasedTriggeringPolicy but able to accept * a human-friendly value for maximumFileSize, eg. "10MB" * * Note that sub-classing SizeBasedTriggeringPolicy is not possible because that * class is final */ public class CustomSizeBasedTriggeringPolicy implements TriggeringPolicy, OptionHandler { /** * Rollover threshold size in bytes. */ private long maximumFileSize = 10 * 1024 * 1024; // let 10 MB the default max size /** * Set the maximum size that the output file is allowed to reach before * being rolled over to backup files. * * <p> * In configuration files, the <b>MaxFileSize</b> option takes an long * integer in the range 0 - 2^63. You can specify the value with the * suffixes "KB", "MB" or "GB" so that the integer is interpreted being * expressed respectively in kilobytes, megabytes or gigabytes. For example, * the value "10KB" will be interpreted as 10240. 
* * @param value * the maximum size that the output file is allowed to reach */ public void setMaxFileSize(String value) { maximumFileSize = OptionConverter.toFileSize(value, maximumFileSize + 1); } public long getMaximumFileSize() { return maximumFileSize; } public void setMaximumFileSize(long maximumFileSize) { this.maximumFileSize = maximumFileSize; } public void activateOptions() { } public boolean isTriggeringEvent(Appender appender, LoggingEvent event, String filename, long fileLength) { try { Thread.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } boolean result = (fileLength >= maximumFileSize); return result; } } and the log4j.xml: <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd"> <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="true"> <appender name="console" class="org.apache.log4j.ConsoleAppender"> <param name="Target" value="System.out" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" /> </layout> </appender> <appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender"> <param name="file" value="C:/temp/test/test_log4j.log" /> <rollingPolicy class="com.mypack.rolling.CustomTimeBasedRollingPolicy"> <param name="fileNamePattern" value="C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log" /> </rollingPolicy> <triggeringPolicy class="com.mypack.rolling.CustomSizeBasedTriggeringPolicy"> <param name="MaxFileSize" value="200KB" /> </triggeringPolicy> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" /> </layout> </appender> <logger name="com.mypack.myrun" additivity="false"> <level value="debug" /> <appender-ref ref="FILE" /> </logger> <root> <priority value="debug" /> <appender-ref ref="console" /> </root> </log4j:configuration>

    Read the article
