Search Results

Search found 26969 results on 1079 pages for 'prevent default'.


  • Drupal: png is not allowed extension (but it is!)

    - by Patrick
    Hi, I'm running Drupal on IIS 6 and hitting an odd error when uploading images to my node: I get '"png" is not a known file'. See this screenshot: http://dl.dropbox.com/u/72686/pngError.png. But I've added png to the allowed image file extensions (the default settings), and the same screenshot shows that png is indeed an allowed extension. Thanks


  • Error #1069 in flash...

    - by sabuj
    Error #1069: Property data not found on flash.display.SimpleButton and there is no default value. Can anyone suggest what might be causing this error and how to solve it?


  • Change query to use a LEFT join

    - by Craig
    I have a query which is failing because it needs a LEFT JOIN, as opposed to the default INNER JOIN produced by the 'join' syntax:

        var users = (from u in this._context.Users
                     join p in this._context.Profiles on u.ProfileID equals p.ID
                     join vw in this._context.vw_Contacts on u.ContactID equals vw.ID
                     orderby u.Code
                     select new { ID = u.ID, profileId = p.ID, u.ContactID, u.Code, u.UserName,
                                  vw.FileAs, p.Name, u.LastLogout, u.Inactive, u.Disabled }).ToList();

    How would I rewrite this so that it utilises a LEFT join?
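    One common way to express a LEFT join in LINQ query syntax is "join ... into" followed by DefaultIfEmpty(). The sketch below reuses the context, entity and property names from the question (assumptions about the model, not verified) and guards the projected columns against the nulls a left join can produce:

        // Sketch only: assumes the same DbContext/entity names as the question above.
        var users = (from u in this._context.Users
                     join p in this._context.Profiles on u.ProfileID equals p.ID into profiles
                     from p in profiles.DefaultIfEmpty()          // p is null when no Profile matches
                     join vw in this._context.vw_Contacts on u.ContactID equals vw.ID into contacts
                     from vw in contacts.DefaultIfEmpty()         // vw is null when no contact matches
                     orderby u.Code
                     select new
                     {
                         ID = u.ID,
                         profileId = p == null ? (int?)null : p.ID,   // assumes an int key; adjust the cast to the real type
                         u.ContactID,
                         u.Code,
                         u.UserName,
                         FileAs = vw == null ? null : vw.FileAs,
                         Name = p == null ? null : p.Name,
                         u.LastLogout,
                         u.Inactive,
                         u.Disabled
                     }).ToList();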


  • PowerPoint Paste HTML Loss of Color [closed]

    - by Tim
    I am trying to paste HTML into PowerPoint 2007. Everything works OK except that I lose the color of the text and the font. I am using Paste Special and selecting HTML. I have read that some people fixed the color loss by setting a color printer as their default, but that does not seem to work for me, nor would it fix the font. Thank you for any help.


  • requestFocus() in java?

    - by Venkats
    In a JTable, if you enter a value in the first column of row 4, the focus moves to the first row, second column by default. I want the focus to move to the next column of the same row (i.e. row 4). How do I use requestFocus() in a JTable so that the focus advances row-wise?


  • Storing and displaying Unicode (Hindi) strings using PHP and MySQL

    - by Anirudh Goel
    I have to store Hindi text in a MySQL database, fetch it using a PHP script and display it on a web page. I did the following: I created a database and set its encoding to UTF-8 and the collation to utf8_bin. I added a varchar field to the table and set its charset property to accept UTF-8 text. Then I set about adding data to it. Here I had to copy data from an existing site. The Hindi text looks like this: ????????:05:30. I directly copied this text into my database and used the PHP code echo(utf8_encode($string)) to display the data. Upon doing so the browser showed me "??????". However, when I inserted the UTF equivalent of the text (taken from "view source" in the browser), ???????? translates into &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351;. If I enter and store &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351; in the database, it converts perfectly. So what I want to know is how I can directly store ???????? into my database, fetch it and display it in my web page using PHP. Also, can anyone help me understand if there's a script which, when I type in ????????, gives me &#2360;&#2370;&#2352;&#2381;&#2351;&#2379;&#2342;&#2351;?

    Solution found: I wrote the following sample script which worked for me. Hope it helps someone else too.

        <html>
        <head><title>Hindi</title></head>
        <body>
        <?php
        include("connection.php");               // simple connection setting
        $result = mysql_query("SET NAMES utf8"); // the main trick
        $cmd = "select * from hindi";
        $result = mysql_query($cmd);
        while ($myrow = mysql_fetch_row($result)) {
            echo ($myrow[0]);
        }
        ?>
        </body>
        </html>

    The dump for my database storing Hindi UTF-8 strings is:

        CREATE TABLE `hindi` (
            `data` varchar(1000) character set utf8 collate utf8_bin default NULL
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
        INSERT INTO `hindi` VALUES ('????????');

    Now my question is, how did it work without specifying "META" or header info? Thanks!


  • Running lmgrd on ubuntu 14.04 LTS

    - by SumanBhatR
    I have installed Xilinx 14.7 in ubuntu 14.04 LTS machine(i386 - 64bit). But I am unable to run lmgrd (for starting the license server). When I googled this problem, I found that lsb-core package needs to be installed. But the package is having many dependencies, I want to know how to install lsb-core package with all the necessary dependencies. Thanks for the help On running sudo apt-get install lsb-core I got the following output Reading package lists... Done Building dependency tree Reading state information... Done Package lsb-core is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'lsb-core' has no installation candidate So I downloaded lsb-core package from http://packages.ubuntu.com/trusty/misc/lsb-core site and used "sudo dpkg -i ./lsb-core_4.1+Debian11ubuntu6_i386.deb" to install it By doing it, I got the following output Selecting previously unselected package lsb-core. (Reading database ... 163205 files and directories currently installed.) Preparing to unpack .../lsb-core_4.1+Debian11ubuntu6_i386.deb ... Unpacking lsb-core (4.1+Debian11ubuntu6) ... dpkg: dependency problems prevent configuration of lsb-core: lsb-core depends on libc6 ( 2.3.5). lsb-core depends on libz1. lsb-core depends on libncurses5. lsb-core depends on libpam0g. lsb-core depends on lsb-invalid-mta (= 4.1+Debian11ubuntu6) | mail-transport-agent. lsb-core depends on at. lsb-core depends on binutils. lsb-core depends on cron | cron-daemon. lsb-core depends on libc6-dev | libc-dev. lsb-core depends on locales. lsb-core depends on m4. lsb-core depends on mailx | mailutils. lsb-core depends on ncurses-term. lsb-core depends on pax. lsb-core depends on psmisc. lsb-core depends on alien (= 8.36). lsb-core depends on python3. lsb-core depends on lsb-security (= 4.1+Debian11ubuntu6). lsb-core depends on time. dpkg: error processing package lsb-core (--install): dependency problems - leaving unconfigured Processing triggers for man-db (2.6.7.1-1) ... Errors were encountered while processing: lsb-core So I want to know how to install lsb-core package with all the necessary dependencies in one go. Thanks for the help


  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules session at DevConnections this week one of the attendees asked a question that I didn’t have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere – not just a page and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text then writing it back out. I’ve actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won’t work outside of the Page class environment and it’s not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close as well as read operations although for a Response.Filter that’s uncommon to be hit. The Response.Filter can be programmatically replaced at runtime which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter hooking up code based, dynamic  GZip compression for requests which is dead simple by applying a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> ///Sets up the current page or handler to use GZip through a Response.Filter ///IMPORTANT:  ///You have to call this method before any output is generated! 
/// </summary> public static void GZipEncodePage() {     HttpResponse Response = HttpContext.Current.Response;     if(IsGZipSupported())     {         stringAcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];         if(AcceptEncoding.Contains("deflate"))         {             Response.Filter = newSystem.IO.Compression.DeflateStream(Response.Filter,                                        System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "deflate");         }         else        {             Response.Filter = newSystem.IO.Compression.GZipStream(Response.Filter,                                       System.IO.Compression.CompressionMode.Compress);             Response.AppendHeader("Content-Encoding", "gzip");                            }     }     // Allow proxy servers to cache encoded and unencoded versions separately    Response.AppendHeader("Vary", "Content-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it’s written. At first blush this seems very simple – you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than just a single Write() statement into the stream, which makes perfect sense for ASP.NET to stream data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it a more difficult to implement any filtering routines since you don’t directly get access to all of the response content which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required that basically captures all the Write() buffers passed into a cached stream and then making the stream available only when it’s complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I’ve used Response.Filter implementations. Each time I had to create a new Stream subclass and create my custom functionality but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content. 
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it’s easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Captures the output only and provides either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allows you to transform the complete response output with events that receive a MemoryStream or String respectively and can you modify the output then return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first then write the entire buffer into the response. TransformWrite, TransformWriteString Allows you to transform the Response data as it is written in its original chunk size in the Stream’s Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there’s no caching involved, but can cause problems due to searched content splitting over multiple chunks. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that’s been modified or a brand new one – which is then sent back to as the final response. The above code can be simplified even more by using the string version events which handle the stream to string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component and we don’t have to create a new stream class or subclass an existing Stream based classed. By the way, the example used here is kind of a cool trick which transforms “~/” expressions inside of the final generated HTML output – even in plain HTML controls not HTML controls – and transforms them into the appropriate application relative path in the same way that ResolveUrl would do. 
So you can write plain old HTML like this: <a href=”~/default.aspx”>Home</a>  and have it turned into: <a href=”/myVirtual/default.aspx”>Home</a>  without having to use an ASP.NET control like Hyperlink or Image or having to constantly use: <img src=”<%= ResolveUrl(“~/images/home.gif”) %>” /> in MVC applications (which frankly is one of the most annoying things about MVC especially given the path hell that extension-less and endpoint-less URLs impose). I can’t take credit for this idea. While discussing the Response.Filter issues on Twitter a hint from Dylan Beattie who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use an example for future demos of Response.Filter functionality in ASP.NET next I time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable however since there’s definitely some overhead to using a Response.Filter in general and especially on one that caches the output and the re-writes it later. Make sure to test for performance anytime you use Response.Filter hookup and make sure it' doesn’t end up killing perf on you. You’ve been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it’s a reusable  component – so for implementation there’s no new class, no subclassing – you simply attach to an event to implement an event handler method with a straight forward signature to retrieve the stream or string you’re interested in. The implementation is based on a subclass of Stream as is required in order to handle the Response.Filter requirements. What’s different than other implementations I’ve seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exception are the TransformWrite and TransformWrite events which operate only active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn’t pass on the Write immediately to the filter’s internal stream. The data is cached and only when the Flush() method is called to finalize the Stream’s output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here’s the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage. 
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means that that it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of bytep[] output is written during the stream's write /// operation. This means it's possibly/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output. 
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings /// /// Note this handler is internal and not meant to be overridden /// as the TransformString Event has to be hooked up in order /// for this handler to even fire to avoid the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { if (TransformString != null) TransformString(responseText); return responseText; } /// <summary> /// Wrapper method form OnTransformString that handles /// stream to string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream); 
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overriden to capture output written by ASP.NET and captured /// into a cached stream that is written out later when Flush() /// is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if ( IsCaptured ) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<t> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few Things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it’s not killing your app’s performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site and – yes you can definitely do that by assigning a Response.Filter inside of a module. If you do this however you’ll want to be very careful and decide which content you actually want to capture especially in IIS 7 which passes ALL content – including static images/CSS etc. through the ASP.NET pipeline. So it is important to filter only on what you’re looking for – like the page extension or maybe more effectively the Response.ContentType. Response.Filter Chaining Originally I thought that filter chaining doesn’t work at all due to a bug in the stream implementation code. But it’s quite possible to assign multiple filters to the Response.Filter property. 
So the following actually works to both compress the output and apply the transformed content: WebUtils.GZipEncodePage(); ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; However the following does not work resulting in invalid content encoding errors: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; WebUtils.GZipEncodePage(); In other words multiple Response filters can work together but it depends entirely on the implementation whether they can be chained or in which order they can be chained. In this case running the GZip/Deflate stream filters apparently relies on the original content length of the output and chokes when the content is modified. But if attaching the compression first it works fine as unintuitive as that may seem. Resources Download example code Capture Output from ASP.NET Pages © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET  
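    Following up on the "Response.Filter works everywhere" note above, here is a minimal sketch of wiring the ResponseFilterStream up from an HttpModule so that every HTML response on the site gets transformed. The module name, the choice of pipeline event and the trivial transformation are illustrative assumptions rather than part of the original article; the point is the ContentType check that keeps static files and other content out of the filter:

        using System;
        using System.Web;

        // Sketch only: assumes the ResponseFilterStream class from the article is available.
        public class HtmlTransformModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // PostRequestHandlerExecute fires after the handler has set ContentType but,
                // with buffered output, before the response is flushed to the client.
                app.PostRequestHandlerExecute += (sender, e) =>
                {
                    HttpResponse response = app.Context.Response;

                    // Filter only what you're actually interested in - here HTML output -
                    // so images/CSS passed through the integrated pipeline are left alone.
                    if (!response.ContentType.StartsWith("text/html", StringComparison.OrdinalIgnoreCase))
                        return;

                    var filter = new ResponseFilterStream(response.Filter);
                    filter.TransformString += html =>
                        html.Replace("</body>", "<!-- injected after render --></body>");
                    response.Filter = filter;
                };
            }

            public void Dispose() { }
        }

    The module still has to be registered in web.config (the system.webServer/modules section in IIS 7 integrated mode), and, as the article warns, it is worth profiling: every matching response is now buffered in memory before it is written out.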


  • Google chrome cannot be installed

    - by Zxy
    I downloaded the latest version of Google Chrome and tried to install it, but it gave me errors. I searched the net and noticed that most people's problems were solved once they installed the missing dependencies. I tried to install them too, but it does not seem to work.

        zero@ubuntu:~/Downloads$ sudo apt-get install -f
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Correcting dependencies... Done
        The following packages will be REMOVED:
          google-chrome-stable:i386
        0 upgraded, 0 newly installed, 1 to remove and 23 not upgraded.
        1 not fully installed or removed.
        After this operation, 116 MB disk space will be freed.
        Do you want to continue [Y/n]? Y
        (Reading database ... 169296 files and directories currently installed.)
        Removing google-chrome-stable:i386 ...
        Processing triggers for man-db ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for gnome-menus ...
        zero@ubuntu:~/Downloads$ sudo dpkg -i google-chrome-stable_current_i386.deb
        Selecting previously unselected package google-chrome-stable:i386.
        (Reading database ... 169201 files and directories currently installed.)
        Unpacking google-chrome-stable:i386 (from google-chrome-stable_current_i386.deb) ...
        dpkg: dependency problems prevent configuration of google-chrome-stable:i386:
         google-chrome-stable:i386 depends on xdg-utils (>= 1.0.2).
        dpkg: error processing google-chrome-stable:i386 (--install):
         dependency problems - leaving unconfigured
        Processing triggers for desktop-file-utils ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for gnome-menus ...
        Processing triggers for man-db ...
        Errors were encountered while processing:
         google-chrome-stable:i386

    Could you please help me? Thanks.


  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real-estate afforded to modern development environments (high-resolution monitors, dual-monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyper-linked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism? A mock-up to help clarify the question: When the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode". Potential advantages include: More source code and more documentation on the screen(s) at once Ability to edit documentation independently of source code (regardless of language?) Write documentation and source code in parallel without merge conflicts Real-time hyperlinked documentation with superior text formatting Quasi-real-time machine translation into different natural languages Every line of code can be clearly linked to a task, business requirement, etc. Documentation could automatically timestamp when each line of code was written (metrics) Dynamic inclusion of architecture diagrams, images to explain relations, etc. Single-source documentation (e.g., tag code snippets for user manual inclusion). Note: The documentation window can be collapsed Workflow for viewing or comparing source files would not be affected How the implementation happens is a detail; the documentation could be: kept at the end of the source file; split into two files by convention (filename.c, filename.c.doc); or fully database-driven By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia) and internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation) and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?


  • Intermittent wired network issues in 14.04

    - by Tommy Brunn
    Since yesterday, my wired network connection has been dropping for a couple of seconds every 30 seconds or so. To my knowledge, I had not made any changes to my network. Output of ifconfig -a: ? ~ ifconfig -a eth0 Link encap:Ethernet HWaddr 6c:f0:49:b9:b1:7f inet addr:192.168.0.16 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::6ef0:49ff:feb9:b17f/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:11597 errors:0 dropped:0 overruns:0 frame:0 TX packets:9783 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:10101682 (10.1 MB) TX bytes:1215142 (1.2 MB) Interrupt:48 Base address:0x8000 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:96691 errors:0 dropped:0 overruns:0 frame:0 TX packets:96691 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:13594355 (13.5 MB) TX bytes:13594355 (13.5 MB) lspci |grep Ethernet: 04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 03) Pinging my router: ? ~ ping 192.168.0.1 PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data. 64 bytes from 192.168.0.1: icmp_seq=1 ttl=64 time=0.435 ms 64 bytes from 192.168.0.1: icmp_seq=2 ttl=64 time=0.571 ms ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable ping: sendmsg: Network is unreachable 64 bytes from 192.168.0.1: icmp_seq=8 ttl=64 time=1.03 ms And the output of route: ? ~ route Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface default 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 192.168.0.0 * 255.255.255.0 U 1 0 0 eth0 Some messages from /var/logs/syslog: ? ~ tail -f /var/log/syslog Jun 6 10:37:34 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:37:34 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:37:37 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:37:37 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:37:39 lolbox dhclient: XMT: Solicit on eth0, interval 8660ms. Jun 6 10:37:39 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:37:39 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:37:47 lolbox dhclient: XMT: Solicit on eth0, interval 16820ms. Jun 6 10:37:47 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:37:47 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:04 lolbox dhclient: XMT: Solicit on eth0, interval 34410ms. Jun 6 10:38:04 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:04 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> (eth0): DHCPv6 request timed out. Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13045 Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) scheduled... Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) started... 
Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): device state change: activated -> failed (reason 'ip-config-unavailable') [100 120 5] Jun 6 10:38:16 lolbox NetworkManager[862]: <info> NetworkManager state is now DISCONNECTED Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> Activation (eth0) failed for connection 'Wired connection 1' Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) complete. Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): device state change: failed -> disconnected (reason 'none') [120 30 0] Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): deactivating device (reason 'none') [0] Jun 6 10:37:34 lolbox whoopsie[1133]: online Jun 6 10:38:16 lolbox whoopsie[1133]: offline Jun 6 10:38:16 lolbox dbus[485]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Jun 6 10:38:16 lolbox dbus[485]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jun 6 10:38:16 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13044 Jun 6 10:38:16 lolbox NetworkManager[862]: <warn> DNS: plugin dnsmasq update failed Jun 6 10:38:16 lolbox NetworkManager[862]: <info> Removing DNS information from /sbin/resolvconf Jun 6 10:38:16 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0. Jun 6 10:38:16 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:16 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 6 10:38:16 lolbox avahi-daemon[619]: Withdrawing address record for 192.168.0.16 on eth0. Jun 6 10:38:16 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:38:16 lolbox avahi-daemon[619]: Interface eth0.IPv4 no longer relevant for mDNS. Jun 6 10:38:16 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:38:17 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:17 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:38:17 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. Jun 6 10:38:18 lolbox dnsmasq[1138]: no servers found in /var/run/dnsmasq/resolv.conf, will retry Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Auto-activating connection 'Wired connection 1'. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) starting connection 'Wired connection 1' Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0] Jun 6 10:38:18 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTING Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting... 
Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0] Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete. Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started... Jun 6 10:38:18 lolbox NetworkManager[862]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0] Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds) Jun 6 10:38:18 lolbox NetworkManager[862]: <info> dhclient started with pid 13160 Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv6 transaction (timeout in 45 seconds) Jun 6 10:38:18 lolbox NetworkManager[862]: <info> dhclient started with pid 13161 Jun 6 10:38:18 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete. Jun 6 10:38:18 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0. Jun 6 10:38:18 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:18 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 6 10:38:18 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4 Jun 6 10:38:18 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium. Jun 6 10:38:18 lolbox dhclient: All rights reserved. Jun 6 10:38:18 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/ Jun 6 10:38:18 lolbox dhclient: Jun 6 10:38:19 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4 Jun 6 10:38:19 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium. Jun 6 10:38:19 lolbox dhclient: All rights reserved. Jun 6 10:38:19 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/ Jun 6 10:38:19 lolbox dhclient: Jun 6 10:38:19 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed nbi -> preinit Jun 6 10:38:19 lolbox dhclient: Bound to *:546 Jun 6 10:38:19 lolbox dhclient: Listening on Socket/eth0 Jun 6 10:38:19 lolbox dhclient: Sending on Socket/eth0 Jun 6 10:38:19 lolbox NetworkManager[862]: <info> (eth0): DHCPv6 state changed nbi -> preinit6 Jun 6 10:38:19 lolbox dhclient: Listening on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:38:19 lolbox dhclient: Sending on LPF/eth0/6c:f0:49:b9:b1:7f Jun 6 10:38:19 lolbox dhclient: Sending on Socket/fallback Jun 6 10:38:19 lolbox dhclient: DHCPREQUEST of 192.168.0.16 on eth0 to 255.255.255.255 port 67 (xid=0x3fc9376d) Jun 6 10:38:19 lolbox dhclient: XMT: Solicit on eth0, interval 1020ms. Jun 6 10:38:19 lolbox dhclient: send_packet6: Cannot assign requested address Jun 6 10:38:19 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes Jun 6 10:38:20 lolbox dhclient: DHCPACK of 192.168.0.16 from 192.168.0.1 Jun 6 10:38:20 lolbox dhclient: bound to 192.168.0.16 -- renewal in 41481 seconds. 
Jun 6 10:38:20 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed preinit -> reboot Jun 6 10:38:20 lolbox NetworkManager[862]: <info> address 192.168.0.16 Jun 6 10:38:20 lolbox NetworkManager[862]: <info> prefix 24 (255.255.255.0) Jun 6 10:38:20 lolbox NetworkManager[862]: <info> gateway 192.168.0.1 Jun 6 10:38:20 lolbox NetworkManager[862]: <info> nameserver '83.255.245.11' Jun 6 10:38:20 lolbox NetworkManager[862]: <info> nameserver '193.150.193.150' Jun 6 10:38:20 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled... Jun 6 10:38:20 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started... Jun 6 10:38:20 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:38:20 lolbox avahi-daemon[619]: New relevant interface eth0.IPv4 for mDNS. Jun 6 10:38:20 lolbox avahi-daemon[619]: Registering new address record for 192.168.0.16 on eth0.IPv4. Jun 6 10:38:20 lolbox dhclient: XMT: Solicit on eth0, interval 2110ms. Jun 6 10:38:20 lolbox dhclient: send_packet6: Cannot assign requested address Jun 6 10:38:20 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes Jun 6 10:38:20 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:38:20 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:38:20 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. Jun 6 10:38:21 lolbox NetworkManager[862]: <info> (eth0): device state change: ip-config -> secondaries (reason 'none') [70 90 0] Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete. Jun 6 10:38:21 lolbox NetworkManager[862]: <info> (eth0): device state change: secondaries -> activated (reason 'none') [90 100 0] Jun 6 10:38:21 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTED_GLOBAL Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Policy set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS. Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Writing DNS information to /sbin/resolvconf Jun 6 10:38:21 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 127.0.0.1#53 Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 193.150.193.150#53 Jun 6 10:38:21 lolbox dnsmasq[1362]: using nameserver 83.255.245.11#53 Jun 6 10:38:21 lolbox NetworkManager[862]: <info> Activation (eth0) successful, device activated. Jun 6 10:38:21 lolbox whoopsie[1133]: message repeated 2 times: [ offline] Jun 6 10:38:21 lolbox whoopsie[1133]: online Jun 6 10:38:21 lolbox ntpdate[13217]: Can't find host ntp.ubuntu.com: Name or service not known (-2) Jun 6 10:38:21 lolbox ntpdate[13217]: no servers can be used, exiting Jun 6 10:38:22 lolbox dnsmasq[1138]: reading /var/run/dnsmasq/resolv.conf Jun 6 10:38:22 lolbox dnsmasq[1138]: using nameserver 127.0.1.1#53 Jun 6 10:38:22 lolbox dhclient: XMT: Solicit on eth0, interval 4080ms. Jun 6 10:38:22 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:22 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:26 lolbox dhclient: XMT: Solicit on eth0, interval 8450ms. Jun 6 10:38:26 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:26 lolbox dhclient: IA_NA status code NoAddrsAvail. 
Jun 6 10:38:35 lolbox dhclient: XMT: Solicit on eth0, interval 16630ms. Jun 6 10:38:35 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:35 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:51 lolbox dhclient: XMT: Solicit on eth0, interval 34860ms. Jun 6 10:38:51 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54. Jun 6 10:38:51 lolbox dhclient: IA_NA status code NoAddrsAvail. Jun 6 10:38:58 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:38:58 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150) Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> (eth0): DHCPv6 request timed out. Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13161 Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) scheduled... Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) started... Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): device state change: activated -> failed (reason 'ip-config-unavailable') [100 120 5] Jun 6 10:39:04 lolbox NetworkManager[862]: <info> NetworkManager state is now DISCONNECTED Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> Activation (eth0) failed for connection 'Wired connection 1' Jun 6 10:38:22 lolbox whoopsie[1133]: online Jun 6 10:39:04 lolbox whoopsie[1133]: offline Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 4 of 5 (IPv6 Configure Timeout) complete. Jun 6 10:39:04 lolbox dbus[485]: [system] Activating service name='org.freedesktop.nm_dispatcher' (using servicehelper) Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): device state change: failed -> disconnected (reason 'none') [120 30 0] Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): deactivating device (reason 'none') [0] Jun 6 10:39:04 lolbox dbus[485]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher' Jun 6 10:39:04 lolbox NetworkManager[862]: <info> (eth0): canceled DHCP transaction, DHCP client pid 13160 Jun 6 10:39:04 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0. Jun 6 10:39:04 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:39:04 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS. Jun 6 10:39:04 lolbox avahi-daemon[619]: Withdrawing address record for 192.168.0.16 on eth0. Jun 6 10:39:04 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16. Jun 6 10:39:04 lolbox avahi-daemon[619]: Interface eth0.IPv4 no longer relevant for mDNS. Jun 6 10:39:04 lolbox NetworkManager[862]: <warn> DNS: plugin dnsmasq update failed Jun 6 10:39:04 lolbox NetworkManager[862]: <info> Removing DNS information from /sbin/resolvconf Jun 6 10:39:04 lolbox dnsmasq[1362]: setting upstream servers from DBus Jun 6 10:39:05 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f. Jun 6 10:39:05 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS. Jun 6 10:39:05 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*. 
Jun 6 10:39:06 lolbox dnsmasq[1138]: no servers found in /var/run/dnsmasq/resolv.conf, will retry
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Auto-activating connection 'Wired connection 1'.
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) starting connection 'Wired connection 1'
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: disconnected -> prepare (reason 'none') [30 40 0]
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTING
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) scheduled...
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) started...
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) scheduled...
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 1 of 5 (Device Prepare) complete.
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) starting...
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: prepare -> config (reason 'none') [40 50 0]
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) successful.
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) scheduled.
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 2 of 5 (Device Configure) complete.
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) started...
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> (eth0): device state change: config -> ip-config (reason 'none') [50 70 0]
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv4 transaction (timeout in 45 seconds)
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> dhclient started with pid 13270
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Beginning DHCPv6 transaction (timeout in 45 seconds)
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> dhclient started with pid 13271
Jun 6 10:39:07 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 3 of 5 (IP Configure Start) complete.
Jun 6 10:39:07 lolbox avahi-daemon[619]: Withdrawing address record for fe80::6ef0:49ff:feb9:b17f on eth0.
Jun 6 10:39:07 lolbox avahi-daemon[619]: Leaving mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f.
Jun 6 10:39:07 lolbox avahi-daemon[619]: Interface eth0.IPv6 no longer relevant for mDNS.
Jun 6 10:39:07 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4
Jun 6 10:39:07 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium.
Jun 6 10:39:07 lolbox dhclient: All rights reserved.
Jun 6 10:39:07 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/
Jun 6 10:39:07 lolbox dhclient:
Jun 6 10:39:08 lolbox dhclient: Internet Systems Consortium DHCP Client 4.2.4
Jun 6 10:39:08 lolbox dhclient: Copyright 2004-2012 Internet Systems Consortium.
Jun 6 10:39:08 lolbox dhclient: All rights reserved.
Jun 6 10:39:08 lolbox dhclient: For info, please visit https://www.isc.org/software/dhcp/
Jun 6 10:39:08 lolbox dhclient:
Jun 6 10:39:08 lolbox dhclient: Bound to *:546
Jun 6 10:39:08 lolbox dhclient: Listening on Socket/eth0
Jun 6 10:39:08 lolbox dhclient: Sending on Socket/eth0
Jun 6 10:39:08 lolbox kernel: [ 1446.098590] type=1400 audit(1402043948.002:75): apparmor="DENIED" operation="signal" profile="/usr/lib/NetworkManager/nm-dhcp-client.action" pid=13273 comm="nm-dhcp-client." requested_mask="send" denied_mask="send" signal=term peer="/sbin/dhclient"
Jun 6 10:39:08 lolbox kernel: [ 1446.098599] type=1400 audit(1402043948.002:76): apparmor="DENIED" operation="signal" profile="/sbin/dhclient" pid=13273 comm="nm-dhcp-client." requested_mask="receive" denied_mask="receive" signal=term peer="/usr/lib/NetworkManager/nm-dhcp-client.action"
Jun 6 10:39:08 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed nbi -> preinit
Jun 6 10:39:08 lolbox dhclient: Listening on LPF/eth0/6c:f0:49:b9:b1:7f
Jun 6 10:39:08 lolbox dhclient: Sending on LPF/eth0/6c:f0:49:b9:b1:7f
Jun 6 10:39:08 lolbox dhclient: Sending on Socket/fallback
Jun 6 10:39:08 lolbox dhclient: DHCPREQUEST of 192.168.0.16 on eth0 to 255.255.255.255 port 67 (xid=0x3e0183b9)
Jun 6 10:39:08 lolbox dhclient: XMT: Solicit on eth0, interval 1050ms.
Jun 6 10:39:08 lolbox dhclient: send_packet6: Cannot assign requested address
Jun 6 10:39:08 lolbox dhclient: dhc6: send_packet6() sent -1 of 77 bytes
Jun 6 10:39:09 lolbox dhclient: DHCPACK of 192.168.0.16 from 192.168.0.1
Jun 6 10:39:09 lolbox dhclient: bound to 192.168.0.16 -- renewal in 35498 seconds.
Jun 6 10:39:09 lolbox NetworkManager[862]: <info> (eth0): DHCPv4 state changed preinit -> reboot
Jun 6 10:39:09 lolbox NetworkManager[862]: <info> address 192.168.0.16
Jun 6 10:39:09 lolbox NetworkManager[862]: <info> prefix 24 (255.255.255.0)
Jun 6 10:39:09 lolbox NetworkManager[862]: <info> gateway 192.168.0.1
Jun 6 10:39:09 lolbox NetworkManager[862]: <info> nameserver '83.255.245.11'
Jun 6 10:39:09 lolbox NetworkManager[862]: <info> nameserver '193.150.193.150'
Jun 6 10:39:09 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Configure Commit) scheduled...
Jun 6 10:39:09 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) started...
Jun 6 10:39:09 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv4 with address 192.168.0.16.
Jun 6 10:39:09 lolbox avahi-daemon[619]: New relevant interface eth0.IPv4 for mDNS.
Jun 6 10:39:09 lolbox avahi-daemon[619]: Registering new address record for 192.168.0.16 on eth0.IPv4.
Jun 6 10:39:09 lolbox avahi-daemon[619]: Joining mDNS multicast group on interface eth0.IPv6 with address fe80::6ef0:49ff:feb9:b17f.
Jun 6 10:39:09 lolbox avahi-daemon[619]: New relevant interface eth0.IPv6 for mDNS.
Jun 6 10:39:09 lolbox avahi-daemon[619]: Registering new address record for fe80::6ef0:49ff:feb9:b17f on eth0.*.
Jun 6 10:39:10 lolbox dhclient: XMT: Solicit on eth0, interval 2180ms.
Jun 6 10:39:10 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
Jun 6 10:39:10 lolbox dhclient: IA_NA status code NoAddrsAvail.
Jun 6 10:39:10 lolbox NetworkManager[862]: <info> (eth0): device state change: ip-config -> secondaries (reason 'none') [70 90 0]
Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Activation (eth0) Stage 5 of 5 (IPv4 Commit) complete.
Jun 6 10:39:10 lolbox NetworkManager[862]: <info> (eth0): device state change: secondaries -> activated (reason 'none') [90 100 0]
Jun 6 10:39:10 lolbox NetworkManager[862]: <info> NetworkManager state is now CONNECTED_GLOBAL
Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Policy set 'Wired connection 1' (eth0) as default for IPv4 routing and DNS.
Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Writing DNS information to /sbin/resolvconf
Jun 6 10:39:10 lolbox dnsmasq[1362]: setting upstream servers from DBus
Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 127.0.0.1#53
Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 193.150.193.150#53
Jun 6 10:39:10 lolbox dnsmasq[1362]: using nameserver 83.255.245.11#53
Jun 6 10:39:10 lolbox NetworkManager[862]: <info> Activation (eth0) successful, device activated.
Jun 6 10:39:10 lolbox whoopsie[1133]: message repeated 2 times: [ offline]
Jun 6 10:39:10 lolbox whoopsie[1133]: online
Jun 6 10:39:10 lolbox ntpdate[13339]: Can't find host ntp.ubuntu.com: Name or service not known (-2)
Jun 6 10:39:10 lolbox ntpdate[13339]: no servers can be used, exiting
Jun 6 10:39:11 lolbox dnsmasq[1138]: reading /var/run/dnsmasq/resolv.conf
Jun 6 10:39:11 lolbox dnsmasq[1138]: using nameserver 127.0.1.1#53
Jun 6 10:39:12 lolbox dhclient: XMT: Solicit on eth0, interval 4350ms.
Jun 6 10:39:12 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
Jun 6 10:39:12 lolbox dhclient: IA_NA status code NoAddrsAvail.
Jun 6 10:39:16 lolbox dhclient: XMT: Solicit on eth0, interval 8740ms.
Jun 6 10:39:16 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
Jun 6 10:39:16 lolbox dhclient: IA_NA status code NoAddrsAvail.
Jun 6 10:39:17 lolbox dnsmasq[1138]: Maximum number of concurrent DNS queries reached (max: 150)
Jun 6 10:39:17 lolbox dnsmasq[1362]: Maximum number of concurrent DNS queries reached (max: 150)
Jun 6 10:39:25 lolbox dhclient: XMT: Solicit on eth0, interval 17610ms.
Jun 6 10:39:25 lolbox dhclient: RCV: Advertise message on eth0 from fe80::120d:7fff:fe97:9d54.
Jun 6 10:39:25 lolbox dhclient: IA_NA status code NoAddrsAvail.

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

- Storage: Import/Export Hard Disk Drives to your Storage Accounts
- HDInsight: General Availability of our Hadoop Service in the cloud
- Virtual Machines: New VM Gallery, ACL support for VIPs
- Web Sites: WebSocket and Remote Debugging Support
- Notification Hubs: Segmented customer push notification support with tag expressions
- TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
- Developer Analytics: New Relic support for Web Sites + Mobile Services
- Service Bus: Support for partitioned queues and topics
- Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

Storage: Import/Export Hard Disk Drives to Windows Azure

I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth).

Encrypted Transport

Our Import/Export service provides built-in support for BitLocker disk encryption - which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

How to Import/Export your first Hard Drive of Data

You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab make sure to sign-up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address. We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

HDInsight: 100% Compatible Hadoop Service in the Cloud

Last week we announced the general availability release of Windows Azure HDInsight.
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal or using our PowerShell and Cross Platform Command line tools.

The import/export hard drive support that came out today is a perfect companion service to use with HDInsight - the combination allows you to easily ingest, process and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

Virtual Machines: VM Gallery Enhancements

Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

- Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.
- Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery - or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can now quickly filter to see the official Oracle supplied images.
- MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.
- Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.
- Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery.

The above improvements make it even easier to use the VM Gallery and quickly create, launch and run Virtual Machines in the cloud.
Virtual Machines: ACL Support for VIPs

A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren't on a virtual network you could alternatively limit traffic from public IPs that can access your workloads.

Here are the default behaviors for ACLs in Windows Azure:

- By default (i.e. no rules specified), all traffic is permitted.
- When using only Permit rules, all other traffic is denied.
- When using only Deny rules, all other traffic is permitted.
- When there is a combination of Permit and Deny rules, all other traffic is denied.

Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well.

Web Sites: Web Sockets Support

With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.
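To make the Web Sockets point concrete, here is a minimal SignalR hub sketch (an illustration added for this write-up, not from the original post; it assumes the Microsoft.AspNet.SignalR NuGet package is installed, and the hub and callback names are placeholders):

    using Microsoft.AspNet.SignalR;

    // A hub exposes methods that connected clients can invoke in real time.
    public class ChatHub : Hub
    {
        public void Send(string name, string message)
        {
            // "broadcastMessage" is the client-side callback name (illustrative).
            Clients.All.broadcastMessage(name, message);
        }
    }

With Web Sockets switched on for the site, SignalR should negotiate the WebSocket transport automatically and fall back to other transports where it isn't available.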
Web Sites: Remote Debugging Support

The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

Remote Debugging of a Windows Azure Web Site using VS 2013

Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure. The debugger will then stop the web site's execution when it hits any break points that you have set within your web application's project inside Visual Studio. For example, I set a breakpoint on the "ViewBag.Message" assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the "About" page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio.

Note how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the "Continue" button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio's Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production we recommend using the "Release" configuration setting - the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively.

New support for using tag expressions to enable advanced customer segmentation

With today's release we are adding support for even more advanced customer targeting. You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example:
- Offers based on multiple preferences: e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian
- Push content to multiple segments in a single message: e.g. rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan
- Avoid presenting subsets of a segment with irrelevant content: e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder

To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. "Loc:Boston") and interest tags (e.g. "Follows:RedSox", "Follows:Cardinals"), and then a notification can be sent by your back-end to "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below:

    var notification = new WindowsNotification(messagePayload);
    hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

- Social: To "all my group except me" - group:id && !user:id
- Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
- Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood
- Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android

For help on getting started with Notification Hubs, visit the Notification Hub documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.
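For completeness, here is a hedged sketch (an addition to this write-up, not from the original post) of how a back-end might create a tagged device registration with the Service Bus .NET SDK, so the tag expressions above have something to match against. The connection string, hub name, channel URI and tag values are all placeholders:

    using Microsoft.ServiceBus.Notifications;

    var hub = NotificationHubClient.CreateClientFromConnectionString(
        "<connection string with manage access>", "<hub name>");

    // Tag the device so that sends targeting
    // "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" will reach it.
    await hub.CreateWindowsNativeRegistrationAsync(
        wnsChannelUri, new[] { "Follows:RedSox", "Loc:Boston" });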
TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services

With today's Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support. It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today's Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it - I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below walkthrough demonstrates how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

Enabling Continuous Delivery to Windows Azure with Team Foundation Services

The project I'm going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I'm hosting using Team Foundation Services. I did this by creating a "SimpleContinuousDeploymentTest" repository there using Git - and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks). Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository. The cool thing about this is that I don't have to buy or rent my own build server - Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

Connecting a Team Foundation Services project to Windows Azure

Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass). Enabling this is now really easy. To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal. I gave the web site a name and then made sure the "Publish from source control" checkbox was selected. When we click next we'll be prompted for the location of the source repository. We'll select "Team Foundation Services". Once we do this we'll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is "scottguthrie"). When we click the "Authorize Now" button we'll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account. Once we do this we'll be prompted to pick the source repository we want to connect to. Starting with today's Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the "SimpleContinuousDeploymentTest" repository we created earlier. Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site. This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

Developer Analytics: New Relic support for Web Sites + Mobile Services

With today's Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the "developer analytics" section that is now within it. Clicking the "add-on" button will display some additional UI. If you don't already have a New Relic subscription, you can click the "view windows azure store" button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything). Clicking the "view windows azure store" button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse from a variety of great add-on services - including New Relic. Select "New Relic" within the dialog, then click the next button, and you'll be able to choose which type of New Relic subscription you wish to purchase. For this demo we'll simply select the "Free Standard Version" - which does not cost anything and can be used forever.

Once we've signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site's configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the "add-on" dropdown (it is automatically populated within it once we have a New Relic subscription in our account). Clicking the "Save" button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site.

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the "Manage NuGet Packages" context menu. This will bring up the NuGet package manager. You can search for "New Relic" within it to find the New Relic agent. Note that there is both a 32-bit and 64-bit edition of it - make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site's "Configuration" tab within the Windows Azure Management Portal). Once we install the NuGet package we are all set to go. We'll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it. We can do this by clicking on the "Add Ons" tab in the left-hand side of the Windows Azure Management Portal. Then select the New Relic add-on we signed-up for within it. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the "Manage" button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account. When we do this a new browser tab will launch with the New Relic admin tool loaded within it. We can now see insights into how our app is performing - without having to have written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see:

- Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify.
- Information about where in the world your customers are hitting the site from (and how performance varies by region)
- Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc)
- Error information including call stack details for exceptions that have occurred at runtime
- SQL Server profiling information - including which queries executed against your database and what their performance was
- And a whole bunch more…

The cool thing about New Relic is that you don't need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these. This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven't tried New Relic out yet with Windows Azure I recommend you do so - I think you'll find it helps you build even better cloud applications. Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today's release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic. The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic. Read this article to learn more about partitioned queues and topics and how to take advantage of them today.
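As a quick illustration (again an addition to this write-up, not from the original post), a partitioned queue can also be created programmatically with the Service Bus SDK 2.2; NamespaceManager, QueueDescription and EnablePartitioning are the actual SDK types and property, while the queue name and connection string below are placeholders:

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    var namespaceManager =
        NamespaceManager.CreateFromConnectionString("<service bus connection string>");

    // EnablePartitioning spreads the queue across multiple message brokers/stores.
    var description = new QueueDescription("orders") { EnablePartitioning = true };

    if (!namespaceManager.QueueExists("orders"))
        namespaceManager.CreateQueue(description);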
Billing: New Billing Alert Service

Today's Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert first sign-up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available. The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the "add alert" button I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month. The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.

Summary

Today's Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don't already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • An Xml Serializable PropertyBag Dictionary Class for .NET

    - by Rick Strahl
I don't know about you but I frequently need property bags in my applications to store and possibly cache arbitrary data. Dictionary<T,V> works well for this although I always seem to be hunting for a more specific generic type that provides a string key based dictionary. There's StringDictionary, but it only works with strings. There's HashSet<T> but it uses the actual values as keys. In most key/value situations a string key is what I want to work off of. Dictionary<T,V> works well enough, but there are some issues with serialization of dictionaries in .NET. The .NET framework doesn't do well serializing IDictionary objects out of the box. The XmlSerializer doesn't support serialization of IDictionary via its default serialization, and while the DataContractSerializer does support IDictionary serialization it produces some pretty atrocious XML.

What doesn't work?

First off Dictionary serialization with the Xml Serializer doesn't work, so the following fails:

    [TestMethod]
    public void DictionaryXmlSerializerTest()
    {
        var bag = new Dictionary<string, object>();
        bag.Add("key", "Value");
        bag.Add("Key2", 100.10M);
        bag.Add("Key3", Guid.NewGuid());
        bag.Add("Key4", DateTime.Now);
        bag.Add("Key5", true);
        bag.Add("Key7", new byte[3] { 42, 45, 66 });

        TestContext.WriteLine(this.ToXml(bag));
    }

    public string ToXml(object obj)
    {
        if (obj == null)
            return null;

        StringWriter sw = new StringWriter();
        XmlSerializer ser = new XmlSerializer(obj.GetType());
        ser.Serialize(sw, obj);
        return sw.ToString();
    }

The error you get with this is:

    System.NotSupportedException: The type System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not supported because it implements IDictionary.

Got it! BTW, the same is true with binary serialization.
Running the same code above against the DataContractSerializer does work:

    [TestMethod]
    public void DictionaryDataContextSerializerTest()
    {
        var bag = new Dictionary<string, object>();
        bag.Add("key", "Value");
        bag.Add("Key2", 100.10M);
        bag.Add("Key3", Guid.NewGuid());
        bag.Add("Key4", DateTime.Now);
        bag.Add("Key5", true);
        bag.Add("Key7", new byte[3] { 42, 45, 66 });

        TestContext.WriteLine(this.ToXmlDcs(bag));
    }

    public string ToXmlDcs(object value, bool throwExceptions = false)
    {
        var ser = new DataContractSerializer(value.GetType(), null, int.MaxValue, true, false, null);
        MemoryStream ms = new MemoryStream();
        ser.WriteObject(ms, value);
        return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length);
    }

This DOES work but produces some pretty heinous XML (formatted with line breaks and indentation here):

    <ArrayOfKeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
      <KeyValueOfstringanyType>
        <Key>key</Key>
        <Value i:type="a:string" xmlns:a="http://www.w3.org/2001/XMLSchema">Value</Value>
      </KeyValueOfstringanyType>
      <KeyValueOfstringanyType>
        <Key>Key2</Key>
        <Value i:type="a:decimal" xmlns:a="http://www.w3.org/2001/XMLSchema">100.10</Value>
      </KeyValueOfstringanyType>
      <KeyValueOfstringanyType>
        <Key>Key3</Key>
        <Value i:type="a:guid" xmlns:a="http://schemas.microsoft.com/2003/10/Serialization/">2cd46d2a-a636-4af4-979b-e834d39b6d37</Value>
      </KeyValueOfstringanyType>
      <KeyValueOfstringanyType>
        <Key>Key4</Key>
        <Value i:type="a:dateTime" xmlns:a="http://www.w3.org/2001/XMLSchema">2011-09-19T17:17:05.4406999-07:00</Value>
      </KeyValueOfstringanyType>
      <KeyValueOfstringanyType>
        <Key>Key5</Key>
        <Value i:type="a:boolean" xmlns:a="http://www.w3.org/2001/XMLSchema">true</Value>
      </KeyValueOfstringanyType>
      <KeyValueOfstringanyType>
        <Key>Key7</Key>
        <Value i:type="a:base64Binary" xmlns:a="http://www.w3.org/2001/XMLSchema">Ki1C</Value>
      </KeyValueOfstringanyType>
    </ArrayOfKeyValueOfstringanyType>

Ouch! That seriously hurts the eye! :-) Worse though it's extremely verbose with all those repetitive namespace declarations. It's good to know that it works in a pinch, but for a human readable/editable solution or something lightweight to store in a database it's not quite ideal.

Why should I care?

As a little background, in one of my applications I have a need for a flexible property bag that is used on a free form database field on an otherwise static entity. Basically what I have is a standard database record to which arbitrary properties can be added in an XML based string field. I intend to expose those arbitrary properties as a collection from field data stored in XML. The concept is pretty simple: when loading, write the data to the collection; when the data is saved, serialize the data into an XML string and store it into the database. When reading the data, pick up the XML, and if the collection on the entity is accessed, automatically deserialize the XML into the Dictionary. (I'll talk more about this in another post.) While the DataContractSerializer would work, its verbosity is problematic both for the size of the generated XML strings and the fact that users can manually edit this XML based property data in an advanced mode. A clean(er) layout certainly would be preferable and more user friendly.

Custom XML Serialization with a PropertyBag Class

So… after a bunch of experimentation with different serialization formats I decided to create a custom PropertyBag class that provides for a serializable Dictionary.
It's basically a custom Dictionary<TType,TValue> implementation with the keys always set as string keys. The results are PropertyBag<TValue> and PropertyBag (which defaults to the object type for values). The PropertyBag<TType> and PropertyBag classes provide these features:

- Subclassed from Dictionary<T,V>
- Implements IXmlSerializable with a cleanish XML format
- ToXml() and FromXml() methods to export and import to and from XML strings
- Static CreateFromXml() method to create an instance

It's simple enough as it's merely a Dictionary<string,object> subclass but that supports serialization to a - what I think at least - cleaner XML format. The class is super simple to use:

    [TestMethod]
    public void PropertyBagTwoWayObjectSerializationTest()
    {
        var bag = new PropertyBag();
        bag.Add("key", "Value");
        bag.Add("Key2", 100.10M);
        bag.Add("Key3", Guid.NewGuid());
        bag.Add("Key4", DateTime.Now);
        bag.Add("Key5", true);
        bag.Add("Key7", new byte[3] { 42, 45, 66 });
        bag.Add("Key8", null);
        bag.Add("Key9", new ComplexObject() { Name = "Rick", Entered = DateTime.Now, Count = 10 });

        string xml = bag.ToXml();
        TestContext.WriteLine(bag.ToXml());

        bag.Clear();
        bag.FromXml(xml);

        Assert.IsTrue(bag["key"] as string == "Value");
        Assert.IsInstanceOfType(bag["Key3"], typeof(Guid));
        Assert.IsNull(bag["Key8"]);
        //Assert.IsNull(bag["Key10"]);
        Assert.IsInstanceOfType(bag["Key9"], typeof(ComplexObject));
    }

This uses the PropertyBag class which uses a PropertyBag<string,object> - which means it returns untyped values of type object. I suspect for me this will be the most common scenario as I'd want to store arbitrary values in the PropertyBag rather than one specific type. The same code with a strongly typed PropertyBag<decimal> looks like this:

    [TestMethod]
    public void PropertyBagTwoWayValueTypeSerializationTest()
    {
        var bag = new PropertyBag<decimal>();
        bag.Add("key", 10M);
        bag.Add("Key1", 100.10M);
        bag.Add("Key2", 200.10M);
        bag.Add("Key3", 300.10M);

        string xml = bag.ToXml();
        TestContext.WriteLine(bag.ToXml());

        bag.Clear();
        bag.FromXml(xml);

        Assert.IsTrue(bag.Get("Key1") == 100.10M);
        Assert.IsTrue(bag.Get("Key3") == 300.10M);
    }

and produces typed results of type decimal. The types can be either value or reference types, the combination of which actually proved to be a little more tricky than anticipated due to the null and specific string value checks required - getting the generic typing right required use of default(T) and Convert.ChangeType() to trick the compiler into playing nice. Of course the whole raison d'etre for this class is the XML serialization. You can see in the code above that we're doing a .ToXml() and .FromXml() to serialize to and from string.
The XML produced for the first example looks like this:

    <?xml version="1.0" encoding="utf-8"?>
    <properties>
      <item>
        <key>key</key>
        <value>Value</value>
      </item>
      <item>
        <key>Key2</key>
        <value type="decimal">100.10</value>
      </item>
      <item>
        <key>Key3</key>
        <value type="___System.Guid">
          <guid>f7a92032-0c6d-4e9d-9950-b15ff7cd207d</guid>
        </value>
      </item>
      <item>
        <key>Key4</key>
        <value type="datetime">2011-09-26T17:45:58.5789578-10:00</value>
      </item>
      <item>
        <key>Key5</key>
        <value type="boolean">true</value>
      </item>
      <item>
        <key>Key7</key>
        <value type="base64Binary">Ki1C</value>
      </item>
      <item>
        <key>Key8</key>
        <value type="nil" />
      </item>
      <item>
        <key>Key9</key>
        <value type="___Westwind.Tools.Tests.PropertyBagTest+ComplexObject">
          <ComplexObject>
            <Name>Rick</Name>
            <Entered>2011-09-26T17:45:58.5789578-10:00</Entered>
            <Count>10</Count>
          </ComplexObject>
        </value>
      </item>
    </properties>

The format is a bit cleaner than the DataContractSerializer. Each item is serialized into <key> <value> pairs. If the value is a string no type information is written. Since string tends to be the most common type this saves space and serialization processing. All other types are attributed. Simple types are mapped to XML types so things like decimal, datetime, boolean and base64Binary are encoded using their XML type values. All other types are embedded with a hokey format that describes the .NET type preceded by three underscores and then are encoded using the XmlSerializer. You can see this best above in the ComplexObject encoding. For custom types this isn't pretty either, but it's more concise than the DCS and it works as long as you're serializing back and forth between .NET clients at least.

The XML generated from the second example that uses PropertyBag<decimal> looks like this:

    <?xml version="1.0" encoding="utf-8"?>
    <properties>
      <item>
        <key>key</key>
        <value type="decimal">10</value>
      </item>
      <item>
        <key>Key1</key>
        <value type="decimal">100.10</value>
      </item>
      <item>
        <key>Key2</key>
        <value type="decimal">200.10</value>
      </item>
      <item>
        <key>Key3</key>
        <value type="decimal">300.10</value>
      </item>
    </properties>

How does it work?

As I mentioned there's nothing fancy about this solution - it's little more than a subclass of Dictionary<T,V> that implements custom XML serialization and a couple of helper methods that facilitate getting the XML in and out of the class more easily. But it's proven very handy for a number of projects for me where dynamic data storage is required. Here's the code:
    /// <summary>
    /// Creates a serializable string/object dictionary that is XML serializable.
    /// Encodes keys as element names and values as simple values with a type
    /// attribute that contains an XML type name. Complex names encode the type
    /// name with type='___namespace.classname' format followed by a standard xml
    /// serialized format. The latter serialization can be slow so it's not recommended
    /// to pass complex types if performance is critical.
    /// </summary>
    [XmlRoot("properties")]
    public class PropertyBag : PropertyBag<object>
    {
        /// <summary>
        /// Creates an instance of a propertybag from an Xml string
        /// </summary>
        /// <param name="xml">Serialized XML input</param>
        /// <returns></returns>
        public static PropertyBag CreateFromXml(string xml)
        {
            var bag = new PropertyBag();
            bag.FromXml(xml);
            return bag;
        }
    }

    /// <summary>
    /// Creates a serializable dictionary for generic types that is XML serializable.
    ///
    /// Encodes keys as element names and values as simple values with a type
    /// attribute that contains an XML type name. Complex names encode the type
    /// name with type='___namespace.classname' format followed by a standard xml
    /// serialized format. The latter serialization can be slow so it's not recommended
    /// to pass complex types if performance is critical.
    /// </summary>
    /// <typeparam name="TValue">Must be a reference type. For value types use type object</typeparam>
    [XmlRoot("properties")]
    public class PropertyBag<TValue> : Dictionary<string, TValue>, IXmlSerializable
    {
        /// <summary>
        /// Not implemented - this means no schema information is passed
        /// so this won't work with ASMX/WCF services.
        /// </summary>
        /// <returns></returns>
        public System.Xml.Schema.XmlSchema GetSchema()
        {
            return null;
        }

        /// <summary>
        /// Serializes the dictionary to XML. Keys are
        /// serialized to element names and values as
        /// element values. An xml type attribute is embedded
        /// for each serialized element - a .NET type
        /// element is embedded for each complex type and
        /// prefixed with three underscores.
        /// </summary>
        /// <param name="writer"></param>
        public void WriteXml(System.Xml.XmlWriter writer)
        {
            foreach (string key in this.Keys)
            {
                TValue value = this[key];

                Type type = null;
                if (value != null)
                    type = value.GetType();

                writer.WriteStartElement("item");

                writer.WriteStartElement("key");
                writer.WriteString(key as string);
                writer.WriteEndElement();

                writer.WriteStartElement("value");
                string xmlType = XmlUtils.MapTypeToXmlType(type);
                bool isCustom = false;

                // Type information attribute if not string
                if (value == null)
                {
                    writer.WriteAttributeString("type", "nil");
                }
                else if (!string.IsNullOrEmpty(xmlType))
                {
                    if (xmlType != "string")
                    {
                        writer.WriteStartAttribute("type");
                        writer.WriteString(xmlType);
                        writer.WriteEndAttribute();
                    }
                }
                else
                {
                    isCustom = true;
                    xmlType = "___" + value.GetType().FullName;
                    writer.WriteStartAttribute("type");
                    writer.WriteString(xmlType);
                    writer.WriteEndAttribute();
                }

                // Actual value serialization
                if (!isCustom)
                {
                    if (value != null)
                        writer.WriteValue(value);
                }
                else
                {
                    XmlSerializer ser = new XmlSerializer(value.GetType());
                    ser.Serialize(writer, value);
                }

                writer.WriteEndElement(); // value
                writer.WriteEndElement(); // item
            }
        }

        /// <summary>
        /// Reads the custom serialized format
        /// </summary>
        /// <param name="reader"></param>
        public void ReadXml(System.Xml.XmlReader reader)
        {
            this.Clear();
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "key")
                {
                    string xmlType = null;
                    string name = reader.ReadElementContentAsString();

                    // item element
                    reader.ReadToNextSibling("value");

                    if (reader.MoveToNextAttribute())
                        xmlType = reader.Value;

                    reader.MoveToContent();

                    TValue value;
                    if (xmlType == "nil")
                        value = default(TValue); // null
                    else if (string.IsNullOrEmpty(xmlType))
                    {
                        // value is a string or object and we can assign TValue to value
                        string strval = reader.ReadElementContentAsString();
                        value = (TValue)Convert.ChangeType(strval, typeof(TValue));
                    }
                    else if (xmlType.StartsWith("___"))
                    {
                        while (reader.Read() && reader.NodeType != XmlNodeType.Element)
                        { }

                        Type type = ReflectionUtils.GetTypeFromName(xmlType.Substring(3));
                        //value = reader.ReadElementContentAs(type, null);
                        XmlSerializer ser = new XmlSerializer(type);
                        value = (TValue)ser.Deserialize(reader);
                    }
                    else
                        value = (TValue)reader.ReadElementContentAs(XmlUtils.MapXmlTypeToType(xmlType), null);

                    this.Add(name, value);
                }
            }
        }

        /// <summary>
        /// Serializes this dictionary to an XML string
        /// </summary>
        /// <returns>XML String or Null if it fails</returns>
        public string ToXml()
        {
            string xml = null;
            SerializationUtils.SerializeObject(this, out xml);
            return xml;
        }
        /// <summary>
        /// Deserializes from an XML string
        /// </summary>
        /// <param name="xml"></param>
        /// <returns>true or false</returns>
        public bool FromXml(string xml)
        {
            this.Clear();

            // if xml string is empty we return an empty dictionary
            if (string.IsNullOrEmpty(xml))
                return true;

            var result = SerializationUtils.DeSerializeObject(xml, this.GetType()) as PropertyBag<TValue>;
            if (result != null)
            {
                foreach (var item in result)
                {
                    this.Add(item.Key, item.Value);
                }
            }
            else
                // null is a failure
                return false;

            return true;
        }

        /// <summary>
        /// Creates an instance of a propertybag from an Xml string
        /// </summary>
        /// <param name="xml"></param>
        /// <returns></returns>
        public static PropertyBag<TValue> CreateFromXml(string xml)
        {
            var bag = new PropertyBag<TValue>();
            bag.FromXml(xml);
            return bag;
        }
    }

The code uses a couple of small helper classes, SerializationUtils and XmlUtils, for mapping XML types to and from .NET, both of which are from the Westwind.Utilities project (which is the same project where PropertyBag lives) from the West Wind Web Toolkit. The code implements ReadXml and WriteXml for the IXmlSerializable implementation using old school XmlReaders and XmlWriters (because it's pretty simple stuff - no need for XLinq here). Then there are two helper methods .ToXml() and .FromXml() that basically allow your code to easily convert between XML and a PropertyBag object. In my code that's what I use to actually persist to and from the entity XML property during .Load() and .Save() operations. It's sweet to be able to have a string key dictionary and then be able to turn around with 1 line of code to persist the whole thing to XML and back. Hopefully some of you will find this class as useful as I've found it. It's a simple solution to a common requirement in my applications and I've used the hell out of it in the short time since I created it.

Resources

You can find the complete code for the two classes plus the helpers in the Subversion repository for Westwind.Utilities. You can grab the source files from there or download the whole project. You can also grab the full Westwind.Utilities assembly from NuGet and add it to your project if that's easier for you.

- PropertyBag Source Code
- SerializationUtils and XmlUtils
- Westwind.Utilities Assembly on NuGet (add from Visual Studio)

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET CSharp

    Read the article

  • E: mkinitramfs failure cpio 141 gzip 1

    - by Nagaraj Shindagi
I'm using Ubuntu 12.04 LTS on a Dell PowerEdge R720 server, and I'm facing a problem when I run apt-get -f install:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    2 not fully installed or removed.
    After this operation, 0 B of additional disk space will be used.
    Setting up linux-image-3.2.0-37-generic-pae (3.2.0-37.58) ...
    Running depmod.
    update-initramfs: deferring update (hook will be called later)
    The link /initrd.img is a dangling link to /boot/initrd.img-3.2.0-37-generic-pae
    Examining /etc/kernel/postinst.d.
    run-parts: executing /etc/kernel/postinst.d/initramfs-tools 3.2.0-37-generic-pae /boot/vmlinuz-3.2.0-37-generic-pae
    update-initramfs: Generating /boot/initrd.img-3.2.0-37-generic-pae
    gzip: stdout: No space left on device
    E: mkinitramfs failure cpio 141 gzip 1
    update-initramfs: failed for /boot/initrd.img-3.2.0-37-generic-pae with 1.
    run-parts: /etc/kernel/postinst.d/initramfs-tools exited with return code 1
    Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/linux-image-3.2.0-37-generic-pae.postinst line 1010.
    dpkg: error processing linux-image-3.2.0-37-generic-pae (--configure):
     subprocess installed post-installation script returned error exit status 2
    dpkg: dependency problems prevent configuration of linux-image-generic-pae:
     linux-image-generic-pae depends on linux-image-3.2.0-37-generic-pae; however:
      Package linux-image-3.2.0-37-generic-pae is not configured yet.
    dpkg: error processing linux-image-generic-pae (--configure):
     dependency problems - leaving unconfigured
    No apport report written because the error message indicates its a followup error from a previous failure.
    Errors were encountered while processing:
     linux-image-3.2.0-37-generic-pae
     linux-image-generic-pae
    E: Sub-process /usr/bin/dpkg returned an error code (1)

I even tried apt-get clean, apt-get remove, apt-get autoremove and apt-get purge; there is no difference, it shows the same error message as above. I also checked the disk space:

    Filesystem      1K-blocks      Used  Available Use% Mounted on
    /dev/sda6        24030076    612456   22196964   3% /
    udev             16536644         4   16536640   1% /dev
    tmpfs             6618884      1164    6617720   1% /run
    none                 5120         0       5120   0% /run/lock
    none             16547208        72   16547136   1% /run/shm
    cgroup           16547208         0   16547208   0% /sys/fs/cgroup
    /dev/sda1           93207     75034      13361  85% /boot
    /dev/sda10        9611492   1096076    8027176  13% /tmp
    /dev/sda12        9611492    226340    8896912   3% /opt
    /dev/sda13        9611492    152516    8970736   2% /srv
    /dev/sda7         9611492    592208    8531044   7% /home
    /dev/sda8         9611492   2656736    6466516  30% /usr
    /dev/sda9         9611492    696468    8426784   8% /var
    /dev/sda14      961237336 134563516  777845764  15% /usr/data
    /dev/sda15      618991384  84498388  503050052  15% /usr/data1
    /dev/sda11        9611492    152616    8970636   2% /usr/local

Is there a problem with how space was allotted to the partitions? Please let me know the solution; it's urgent. Regards

    Read the article

  • HPC Server Dynamic Job Scheduling: when jobs spawn jobs

    - by JoshReuben
HPC Job Types

HPC has 3 types of jobs: http://technet.microsoft.com/en-us/library/cc972750(v=ws.10).aspx

· Task Flow – vanilla sequence
· Parametric Sweep – concurrently run multiple instances of the same program, each with a different work unit input
· MPI – message passing between master & slave tasks

But when you try to go outside the box – job tasks that spawn jobs, blocking the parent task – you run the risk of resource starvation, deadlocks, and recursive, non-converging or exponential blow-up. The solution to this is to write some performance monitoring and job scheduling code. You can do this in 2 ways:

· Manually control scheduling – allocate/de-allocate resources, change job priorities, pause & resume tasks, restrict long running tasks to specific compute clusters
· Semi-automatically – set threshold params for scheduling

How – Control Job Scheduling

In order to manage the tasks and resources that are associated with a job, you will need to access the ISchedulerJob interface - http://msdn.microsoft.com/en-us/library/microsoft.hpc.scheduler.ischedulerjob_members(v=vs.85).aspx (a short sketch of this API appears at the end of this post). This really allows you to control how a job is run – you can access & tweak the following features:

· max / min resource values
· whether job resources can grow / shrink, whether jobs can be pre-empted, and whether the job is exclusive per node
· the creator process id & the job pool
· timestamp of job creation & completion
· job priority, hold time & run time limit
· re-queue count
· job progress
· max / min number of cores, nodes, sockets, RAM
· dynamic task list – can add / cancel tasks on the fly
· job counters

When – Poll Perf Counters

Tweaking the job scheduler should be done on the basis of resource utilization according to PerfMon counters – HPC exposes 2 perf objects, Compute Clusters and Compute Nodes: http://technet.microsoft.com/en-us/library/cc720058(v=ws.10).aspx

You can monitor running jobs according to dynamic thresholds – use your own discretion:

· Percentage processor time
· Number of running jobs
· Number of running tasks
· Total number of processors
· Number of processors in use
· Number of processors idle
· Number of serial tasks
· Number of parallel tasks

Design Your Algorithms Correctly

Finally, don't assume you have unlimited compute resources in your cluster – design your algorithms with the following factors in mind:

· Branching factor - http://en.wikipedia.org/wiki/Branching_factor - dynamically optimize the number of children per node
· Cutoffs to prevent explosions - http://en.wikipedia.org/wiki/Limit_of_a_sequence - not all functions converge after n attempts; you also need a "good enough" threshold, as returns diminish
· Heuristic shortcuts - http://en.wikipedia.org/wiki/Heuristic - sometimes an exhaustive search is impractical and short cuts are suitable
· Pruning - http://en.wikipedia.org/wiki/Pruning_(algorithm) – remove / de-prioritize unnecessary tree branches
· Avoid local minima / maxima - http://en.wikipedia.org/wiki/Local_minima - sometimes an algorithm can't converge because it gets stuck in a local saddle – try simulated annealing, hill climbing or genetic algorithms to get out of these ruts
· Watch out for rounding errors – http://en.wikipedia.org/wiki/Round-off_error - multiple iterations run in parallel can quickly amplify & blow up your algo! Use an epsilon; avoid floating point errors, truncations and approximations

Happy Coding!
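Here is a hedged sketch (my illustration, not prescriptive) of the manual-control approach using the Microsoft.Hpc.Scheduler API; the head node name, command line and limit values are placeholders:

    using Microsoft.Hpc.Scheduler;
    using Microsoft.Hpc.Scheduler.Properties;

    IScheduler scheduler = new Scheduler();
    scheduler.Connect("headnode01");            // placeholder head node name

    ISchedulerJob job = scheduler.CreateJob();
    job.MinimumNumberOfCores = 4;               // floor so the job can always start
    job.MaximumNumberOfCores = 32;              // cap to avoid starving other jobs
    job.CanShrink = true;                       // release cores when tasks go idle
    job.Priority = JobPriority.BelowNormal;     // long-running work yields to others

    ISchedulerTask task = job.CreateTask();
    task.CommandLine = "worker.exe /unit:42";   // placeholder work item
    job.AddTask(task);

    // Passing null credentials prompts for / reuses cached credentials.
    scheduler.SubmitJob(job, null, null);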

    Read the article

  • Limiting DOPs &ndash; Who rules over whom?

    - by jean-pierre.dijcks
I've gotten a couple of questions from Dan Morgan and figured I'd start to answer them in this way. While Dan is running on a big system, he is running with Database Resource Manager and he is trying to make sure the system doesn't go crazy (remember, end users are never, ever crazy!) on very high DOPs.

Q: How do I control statements with very high DOPs driven from user hints in queries?

A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting you can set in DBRM allows you to overwrite the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM and DBRM overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints.

Q: What happens if I set parallel_max_servers higher than processes (i.e. the max number of processes allowed to run on my machine)?

A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter you should get a warning in the alert log stating it can not take effect. As a follow up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number; the lower of those two parameters effectively caps it. And since parallel_max_servers should be lower than max processes you can never go over either...

    Read the article

  • A few things I learned regarding Azure billing policies

    - by Vincent Grondin
An hour of small computing time: 0,12$
A Gig of storage in the cloud: 0,15$ per month
1 Gig of relational database using Azure SQL: 9,99$ per month
A Visual Studio Professional with MSDN Premium account: 2500$ per year
Winning an MSDN Professional account that comes preloaded with 750 free hours of Azure per month: PRICELESS!!!
But was it really free? Hmmm… let's see.... Here are a few things I learned regarding Azure billing policies when I attended a promotional training at Microsoft last week...
1) An instance deployed in the cloud really means whatever you upload in there... it doesn't matter if it's in STAGING OR PRODUCTION!!!! Your MSDN account comes with 750 free hours of small computing time per month, which should be enough for one instance of one application deployed in the cloud... So we're cool, the application you run in the cloud doesn't cost you a penny... BUT the one that's in staging is still consuming time!!! So if you don't want to end up having to pay 42$ at the end of the month on your credit card, like what happened to a friend of mine, DELETE those staging applications once you've put them in production! This also applies to the instance count you can modify in the configuration file… so stop and think before you decide you want to spawn 50 of those hello world apps.
2) If you have an MSDN account, then you have the promotional 750 hours of Azure credits per month and can use them to explore the cloud! But be aware: this promotion ends in 8 months (maybe more like 7 now), and then you will most likely go back to the standard 250 hours of Azure credits. If you do not delete your applications by then, you'll get billed for the extra hours, believe me… There is a switch that you can toggle which will STOP your automatic enrollment after the promotion and prevent you from renewing the Azure account automatically. Yes, the default setting is to automatically renew your account and, remember, you entered your credit card information in the registration process, so yes, you WILL be billed… Go disable that ASAP. Log into your account, go to "Windows Azure Platform", then click the "Subscriptions" tab, and on the right side you'll see a drop down with different "Actions" in it… Choose "Opt out of auto renew" and NOW you're safe…
Still, this is a great offer by Microsoft and I think everyone who has a chance should play a bit with Azure to get to know this technology a bit more... Happy Cloud Computing, all!
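P.S. As a back-of-the-envelope check of the numbers above, here is a small, hypothetical C# sketch (rates and free hours are the ones quoted in this post; everything else is illustrative):

using System;

class AzureBillSketch
{
    static void Main()
    {
        const double ratePerHour = 0.12; // small compute instance, per the post
        const int freeHours = 750;       // promotional MSDN hours per month

        int instances = 2;                   // production + a forgotten staging instance
        int hoursUsed = instances * 30 * 24; // 1440 instance-hours in a month

        int billable = Math.Max(0, hoursUsed - freeHours);
        Console.WriteLine("Billable: {0} hours -> {1:0.00}$", billable, billable * ratePerHour);
        // 1440 - 750 = 690 hours -> 82,80$ for the month; my friend's 42$ bill
        // works out to 42 / 0,12 = 350 extra hours, roughly two weeks of one
        // extra instance left running.
    }
}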

    Read the article

  • Multi-Threaded Application vs. Single Threaded Application

Why would we use a multi-threaded application vs. a single-threaded application? First we must define multithreading. Multithreading is a feature of an operating system that allows programs to run subcomponents, or threads, in parallel. Typically most applications only need to use one thread because they do not perform time-consuming tasks. The use of multiple threads allows an application to distribute long-running tasks so that they can be executed in parallel. This gives the user the perception that the application is working faster, because while one thread is waiting on an IO operation the remaining tasks can make use of the available CPU. This allows working threads to execute in tandem so that they can be completed sooner.
Multithreading Benefits
Improved responsiveness — users usually report improved responsiveness compared to single-threaded applications.
Faster applications — multiple threads can lead to improved application performance.
Prioritization — threads can be assigned a priority, which allows higher priority tasks to take precedence over lower priority tasks.
Single Threading Benefits
Programming and debugging — these activities are easier compared to multithreaded applications due to the reduced complexity.
Less overhead — threads add overhead to an application.
When developing multi-threaded applications, the following must be considered. Deadlocks occur when two threads each hold a monitor that the other one requires. In essence each task is blocking the other, and both tasks are waiting for the other monitor to be released, which forces the application to hang, or deadlock. Resource allocation is used to prevent deadlocks: the system determines whether approving a resource request would leave the system in an unsafe state (one that could result in a deadlock) and only approves requests that lead to safe states. Thread synchronization is used when multiple threads share the same instance of an object: access to the shared object is locked and synchronized so that each task interacts with it one at a time. A sketch of both ideas follows below.
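Here is a minimal C# sketch of those last two points (names are illustrative, not from any particular framework): acquiring locks in a fixed global order prevents the deadlock described above, and the lock statement synchronizes access to the shared objects:

using System;
using System.Threading;

class Account
{
    public int Id;               // used to define a global lock order
    public decimal Balance;
    public readonly object Gate = new object();
}

class TransferDemo
{
    // Locking both accounts in a fixed global order (lowest Id first)
    // guarantees two concurrent transfers can never deadlock, because
    // no thread can hold the second monitor while waiting for the first.
    static void Transfer(Account from, Account to, decimal amount)
    {
        Account first = from.Id < to.Id ? from : to;
        Account second = from.Id < to.Id ? to : from;
        lock (first.Gate)
        lock (second.Gate)
        {
            from.Balance -= amount;
            to.Balance += amount;
        }
    }

    static void Main()
    {
        var a = new Account { Id = 1, Balance = 100m };
        var b = new Account { Id = 2, Balance = 100m };

        // Opposite-direction transfers on two threads: safe, since both
        // threads take a.Gate before b.Gate.
        var t1 = new Thread(() => Transfer(a, b, 10m));
        var t2 = new Thread(() => Transfer(b, a, 25m));
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();

        Console.WriteLine("a={0}, b={1}", a.Balance, b.Balance); // a=115, b=85
    }
}

The same ordering discipline generalizes to any number of locks and is the cheapest practical defense against the deadlock scenario described above.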

    Read the article

  • Announcing Oracle Enterprise Content Management Suite 11g

    - by [email protected]
Today Oracle announced Oracle Enterprise Content Management Suite 11g. This is a major release for us, and it reinforces our three key themes at Oracle:
Complete
New in this release - Oracle ECM Suite 11g is built on a single, unified repository. Every piece of content - documents, HTML pages, digital assets, scanned images - is stored in and accessible directly from the repository, whether you are working on websites, creating brand logos, processing accounts payable invoices, or running records and retention functions. It makes complete, end-to-end management of content possible, from the point it enters the organization, through its entire lifecycle. Also new in this release, the installation, access, monitoring and administration of Oracle ECM Suite 11g is centralized. As a complete system, organizations can lower the costs of training and usage by having a centralized source of information that is easily administered. As part of this new unified repository release, Oracle has released a benchmarking white paper that shows the extreme performance and scalability of Oracle ECM Suite: when tested on a two-node UCM Server running on Sun Oracle DB Machine Half Rack hardware with an Exadata storage server, Oracle ECM Suite 11g is able to ingest over 178 million documents per day.
Open
Oracle ECM Suite 11g is built on a service-oriented architecture. All functions are available through standards-based service calls in Web Services or Java. In this release Oracle unveils Open Web Content Management, a revolutionary approach to web content management that decouples the content management process from the process of creating web applications. One piece of this approach is our one-click web content management: with one click, a web application builder can drag content services into their application, enabling their users to also edit content with just one click. Open Web Content Management is also open because it enables web developers to add web content management to new and existing JavaServer Pages (JSP), JavaServer Faces (JSF) and Oracle Application Development Framework (ADF) Faces applications. Open content distribution - Oracle ECM Suite 11g offers flexible deployment options with a built-in smart cache, so organizations can deliver Web sites or Web applications without requiring Oracle ECM Suite as part of the delivery system.
Integrated
Oracle ECM Suite 11g also offers a series of next-generation desktop integrations, such as:
· New MS Office integration with menus to access managed content, insert managed links, and compare managed documents using standard MS Office reviewing tools
· Automatic identity tagging of documents on download - to help users understand which versions they are viewing and to prevent duplicate content items in the content repository
· New "smart productivity folders" to show a user's workflow inbox, saved searches and checked-out content directly from Windows Explorer
· Drag and drop metadata pop-ups
· Check-in and check-out for all file formats with any standard WebDAV server
As part of Oracle's Enterprise Application Documents initiative, Oracle Content Management 11g also provides certified application integrations with solution templates. You can read the press release here. You can see more assets at the launch center here. You can sign up for the announcement webinar and hear more about the new features here. You can read the benchmarking study here.

    Read the article

  • MAXDOP in SQL Azure

    - by Herve Roggero
In my search for a better understanding of the scalability options of SQL Azure, I stumbled on an interesting aspect: query hints in SQL Azure — more specifically, the MAXDOP hint. A few years ago I did a lot of analysis on this query hint (see my article on SQL Server Central: http://www.sqlservercentral.com/articles/Configuring/managingmaxdegreeofparallelism/1029/). Here is a quick synopsis of MAXDOP: it is a query hint you use when issuing a SQL statement that gives you control over how many processors SQL Server will use to execute the query. For complex queries with lots of I/O requirements, more CPUs can mean faster parallel searches. However, the impact can be drastic on other running threads/processes: if your query takes all available processors at 100% for 5 minutes... guess what... nothing else works. The bottom line is that more is not always better. The use of MAXDOP is more art than science... and a whole lot of testing; it depends on two things: the underlying hardware architecture and the application design. So there isn't a magic number that will work for everyone... except 1... :) Let me explain. The rules of engagement are different. SQL Azure is about sharing. Yep... you are forced to be nice to your neighbors. To achieve this goal SQL Azure sets the MAXDOP to 1 by default, and ignores the use of the MAXDOP hint altogether. That means that all your queries will use one and only one processor. It really isn't such a bad thing, however. Keep in mind that in some of the largest SQL Server implementations MAXDOP is also set to 1; it is a well-known configuration setting for large-scale implementations. The reason is precisely to prevent rogue statements (like a SELECT * FROM HISTORY) from bringing down your systems (like a report that should have been running on a different server in the first place), and to avoid the overhead generated by executing too many parallel queries, which could cause internal memory management nightmares for the host operating system. In summary, forcing the MAXDOP to 1 in SQL Azure makes sense; it ensures that your database will continue to function normally even if one of the other tenants on the same server is running massive queries that would otherwise bring you down. Last but not least, keep in mind that when you test your database code for performance on-premise, you should set the DOP to 1 on your SQL Server databases to simulate SQL Azure conditions.
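As a hedged sketch of that last point, the following C# (using System.Data.SqlClient; the connection string and table are placeholders) sets an on-premise instance to DOP 1 and shows the per-query MAXDOP hint — honored on-premise, ignored by SQL Azure:

using System;
using System.Data.SqlClient;

class SimulateAzureDop
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=Test;Integrated Security=true"))
        {
            conn.Open();

            // On-premise only: force single-threaded plans instance-wide,
            // mimicking SQL Azure's fixed DOP of 1 (requires sysadmin rights)
            var configure = new SqlCommand(
                "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
                "EXEC sp_configure 'max degree of parallelism', 1; RECONFIGURE;", conn);
            configure.ExecuteNonQuery();

            // Per-query alternative via the hint discussed above
            var query = new SqlCommand(
                "SELECT COUNT(*) FROM dbo.History OPTION (MAXDOP 1);", conn);
            Console.WriteLine(query.ExecuteScalar());
        }
    }
}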

    Read the article

  • How to add items to the Document Types context menu.

    - by Vizioz Limited
I am currently working on an extension to Umbraco that needed an extra menu item on the Document Types menu in the Settings section. My first thought was to use the BeforeNodeRender event to add the menu item, but this event only fires on the Content and Media menu trees. See: Codeplex Issue 21623. The "temporary" solution has been to extend the "loadNodeTypes" class; I say temporary because I assume that the core team may well extend the event functionality to the entire menu tree in the future, which would be much better and would prevent you from having to completely override the menu items. This was suggested by Slace on my "is it possible to add an item to the context menu in tree's other than media content" post on the our.umbraco.org site. There are three things you need to do:

1) Override the loadNodeTypes class

using System.Collections.Generic;
using umbraco.interfaces;
using umbraco.BusinessLogic.Actions;

namespace Vizioz.xMind2Umbraco
{
    // Note: remember these menus might change in future versions of Umbraco
    public class loadNodeTypesMenu : umbraco.loadNodeTypes
    {
        public loadNodeTypesMenu(string application) : base(application)
        {
        }

        protected override void CreateRootNodeActions(ref List<IAction> actions)
        {
            actions.Clear();
            actions.Add(ActionNew.Instance);
            actions.Add(xMindImportAction.Instance);
            actions.Add(ContextMenuSeperator.Instance);
            actions.Add(ActionImport.Instance);
            actions.Add(ContextMenuSeperator.Instance);
            actions.Add(ActionRefresh.Instance);
        }

        protected override void CreateAllowedActions(ref List<IAction> actions)
        {
            actions.Clear();
            actions.Add(ActionCopy.Instance);
            actions.Add(xMindImportAction.Instance);
            actions.Add(ContextMenuSeperator.Instance);
            actions.Add(ActionExport.Instance);
            actions.Add(ContextMenuSeperator.Instance);
            actions.Add(ActionDelete.Instance);
        }
    }
}

2) Create a new Action to be used on the menu

using umbraco.interfaces;

namespace Vizioz.xMind2Umbraco
{
    public class xMindImportAction : IAction
    {
        #region Implementation of IAction

        public static xMindImportAction Instance
        {
            get { return umbraco.Singleton<xMindImportAction>.Instance; }
        }

        public char Letter
        {
            get { return 'p'; }
        }

        public bool ShowInNotifier
        {
            get { return true; }
        }

        public bool CanBePermissionAssigned
        {
            get { return true; }
        }

        public string Icon
        {
            get { return "editor/OPEN.gif"; }
        }

        public string Alias
        {
            get { return "Import from xMind"; }
        }

        public string JsFunctionName
        {
            get { return "openModal('dialogs/xMindImport.aspx?id=' + nodeID, 'Publish Management', 550, 480);"; }
        }

        public string JsSource
        {
            get { return string.Empty; }
        }

        #endregion
    }
}

3) Update the UmbracoAppTree table; in my case I changed the following:

TreeHandlerType
From: loadNodeTypes
To: loadNodeTypesMenu

TreeHandlerAssembly
From: umbraco
To: Vizioz.xMind2Umbraco

And the result is shown in the screenshot in the original post.

    Read the article

  • SQL SERVER – Select and Delete Duplicate Records – SQL in Sixty Seconds #036 – Video

    - by pinaldave
Developers often face situations where they find that their column has duplicate records which they want to delete. A good developer will never delete any data without observing it and making sure that what is being deleted is absolutely fine to delete. Before deleting duplicate data, one should select it and see if the data really is duplicated. In this video we demonstrate two scripts: 1) select duplicate records, 2) delete duplicate records. We are assuming that the table has a unique incremental id. Additionally, we are assuming that in the case of duplicate records we would like to keep the latest record. If there is really a business need to keep only unique records, one should consider creating a unique index on the column. A unique index will prevent users from entering duplicate data into the table from the beginning, which should be the best solution. However, deleting duplicate data is also a very valid request. If the user realizes that they need to keep only unique records in the column and they are willing to create a unique constraint, the very first requirement of creating that constraint is to delete the existing duplicate records. Let us see how to do it in Sixty Seconds. Here is the script which is used in the video:

USE tempdb
GO
CREATE TABLE TestTable (ID INT, NameCol VARCHAR(100))
GO
INSERT INTO TestTable (ID, NameCol)
SELECT 1, 'First'
UNION ALL
SELECT 2, 'Second'
UNION ALL
SELECT 3, 'Second'
UNION ALL
SELECT 4, 'Second'
UNION ALL
SELECT 5, 'Second'
UNION ALL
SELECT 6, 'Third'
GO
-- Selecting Data
SELECT * FROM TestTable
GO
-- Detecting Duplicates
SELECT NameCol, COUNT(*) TotalCount
FROM TestTable
GROUP BY NameCol
HAVING COUNT(*) > 1
ORDER BY COUNT(*) DESC
GO
-- Deleting Duplicates
DELETE FROM TestTable
WHERE ID NOT IN (
    SELECT MAX(ID)
    FROM TestTable
    GROUP BY NameCol)
GO
-- Selecting Data
SELECT * FROM TestTable
GO
DROP TABLE TestTable
GO

Related Tips in SQL in Sixty Seconds:
SQL SERVER – Delete Duplicate Records – Rows
SQL SERVER – Count Duplicate Records – Rows
SQL SERVER – 2005 – 2008 – Delete Duplicate Rows
Delete Duplicate Records – Rows – Readers Contribution
Unique Nonclustered Index Creation with IGNORE_DUP_KEY = ON – A Transactional Behavior

What would you like to see in the next SQL in Sixty Seconds video?
Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • CodeStock 2012 Review: Michael Eaton( @mjeaton ) - 3 Simple Things for Increased Productivity

3 Simple Things for Increased Productivity
Speaker: Michael Eaton
Twitter: @mjeaton
Blog: http://mjeaton.net/blog
This was the first time I had seen Michael Eaton speak, but I have heard a lot of really good things about his speaking abilities. Needless to say, I was really looking forward to his session. He addressed the topic of distractions and how they can decrease or increase your productivity as a developer. He makes the case that in order to become more productive you must block or limit all distractions. For example, he covered his top distractions as a developer:
Social media (Twitter, Reddit, Facebook)
Wiki sites
Phone
Email
Video games
Coworkers, friends, family
Michael stated that he uses various types of music to help him block out these distractions so he can get into his coding zone. While he states that music works for him, he also notes that he knows others who cannot really work with music. I have to say I am in the latter group, because I require a quiet environment in order to work. A few session attendees also recommended listening to really loud white noise, or to music in a language other than your own; this places less focus on the words being sung and more on the rhythmic beats being played. I have to say that I have not tried these suggestions yet, but will in the near future. However, distractions can also be beneficial to productivity in that they give your mind a chance to relax and not think about the issues at hand. He spoke highly of taking vacations and setting boundaries at work so that developers prevent burnout. One way he suggested that developers combat distractions is to use the Pomodoro technique. In his example he selects one task to do for 20 minutes, and he can only do that task during that time. He ignores all other distractions until the task or the time limit is complete. After it is completed he allows himself to relax and be distracted for another 5–10 minutes before his next Pomodoro. This allows him to stay completely focused on a task, and when the time is up he can then focus on other things.

    Read the article

  • Unable to install Konstruktor program

    - by George Barnick
I've recently switched to Linux, having a good knowledge of how it works before switching, since I've had experience working with Ubuntu-based servers for several months. I've been trying to install a LEGO CAD program that I have found, based on an open source library. Information on that is here. The program I'm looking at is called Konstruktor, and for the most part it should be working. One dependency, however, does not seem to be able to install. The missing dependency is "kdelibs5", which I know what it is and what it's for, but I can't get it to install. When trying to install, I get this error:

george@GEORGE-PC:~$ sudo apt-get install kdelibs5
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package kdelibs5 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'kdelibs5' has no installation candidate

When trying to install the proper package of Konstruktor for my system, I get this error:

george@GEORGE-PC:~$ sudo dpkg -i '/home/george/Downloads/konstruktor_0.9-beta1-2_amd64.deb'
Selecting previously unselected package konstruktor.
(Reading database ... 224684 files and directories currently installed.)
Unpacking konstruktor (from .../konstruktor_0.9-beta1-2_amd64.deb) ...
dpkg: dependency problems prevent configuration of konstruktor:
 konstruktor depends on kdelibs5 (>= 4:4.4.5); however:
  Package kdelibs5 is not installed.
dpkg: error processing konstruktor (--install):
 dependency problems - leaving unconfigured
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf-2.index...
Processing triggers for desktop-file-utils ...
Processing triggers for gnome-menus ...
Processing triggers for mime-support ...
Processing triggers for hicolor-icon-theme ...
Processing triggers for shared-mime-info ...
Unknown media type in type 'all/all'
Unknown media type in type 'all/allfiles'
Unknown media type in type 'uri/mms'
Unknown media type in type 'uri/mmst'
Unknown media type in type 'uri/mmsu'
Unknown media type in type 'uri/pnm'
Unknown media type in type 'uri/rtspt'
Unknown media type in type 'uri/rtspu'
Errors were encountered while processing:
 konstruktor

I have tried updating and adding apt repositories for KDE packages, but I continuously get the same error. I tried the package "kdelibs5-dev", which installed fine, but "kdelibs5" won't, and Konstruktor still won't install. I am on Ubuntu 13.10 with GNOME Shell on amd64 architecture. Any help would be much appreciated. (I originally posted my issue here.) Thanks ahead of time.

    Read the article

  • Ranking Part III

    - by PointsToShare
© 2011 By: Dov Trietsch. All rights reserved

Ranking Part III
In previous blogs, "Ranking an Introduction" and "Ranking Part II", you have already praised me in "Rank the Author" and learned how to create a new element on a page and how to place it where you need it. For this installment, I just added code to keep the number of votes (you vote by clicking one of the stars) and the total number of stars. Using these two, we can compute the average rating. It's a small step, but its purpose is to show that we do not need a detailed history in order to compute the average; a running total is sufficient. Please note that once you close the game, you will lose your previous total. In real life, we persist the totals in the list itself. We also keep a list of actual votes, but its purpose is to prevent double votes: if a person has already voted, his user id is already on the list, and our program will check for it and bar the person from voting again. This is coded in an event receiver, which is a piece of SharePoint server-side code. I will show you how to do this part in a subsequent blog. Again, go to the page and look at the code. The gist of it is here; avg, votes, and stars are global variables that I defined before.

function sendRate(sel) {
    // I hate long lines, so I created the pieces of the message in their own vars
    var s1 = "Your Rating Was: ";
    var s2 = ".. ";
    var s3 = "\nVotes = ";
    var s4 = "\nTotal Stars = ";
    var s5 = "\nAverage = ";
    var s;
    s = parseInt(sel.id.replace("_", ''), 10); // Get the selected star number
    votes = parseInt(votes, 10) + 1;
    stars = parseInt(stars, 10) + s;
    avg = parseFloat(stars) / parseFloat(votes);
    alert(s1 + sel.id + s2 + sel.title + s3 + votes + s4 + stars + s5 + avg);
}

Click on the link to play and examine "Ranking with Stats". That's all folks!

    Read the article
