Search Results

Search found 9301 results on 373 pages for 'git filter branch'.


  • A free standing ASP.NET Pager Web Control

    - by Rick Strahl
    Paging in ASP.NET has been relatively easy, with stock controls supporting basic paging functionality. However, I recently built an MVC application and one of the things I ran into was that I HAD TO build manual paging support into a few of my pages. Dealing with list controls and rendering markup is easy enough, but doing paging is a little more involved. I ended up with a small but flexible component that can be dropped anywhere. As it turns out, the task of creating a semi-generic Pager control for MVC was fairly easy. Now I'm back to working in Web Forms and thought to myself that the way I created the pager in MVC would actually also work in ASP.NET – in fact quite a bit more easily, since the whole thing can be conveniently wrapped up into an easily reusable control. A standalone pager would provide easier reuse in various pages and a more consistent pager display regardless of what kind of 'control' the pager is associated with.

    Why a Pager Control?
    At first blush it might sound silly to create a new pager control – after all, Web Forms has pretty decent paging support, doesn't it? Well, sort of. Yes, the GridView control has automatic paging built in and the ListView control has the related DataPager control. The built-in ASP.NET paging has several issues though:

    Postback and JavaScript requirements. If you look at paging links in ASP.NET they are always postback links with javascript:__doPostback() calls that go back to the server. While that works fine and actually has some benefit – like the fact that paging saves changes to the page and posts them back – it's not very SEO friendly. Basically, if you use JavaScript-based navigation no search engine will follow the paging links, which effectively cuts off list content after the first page. The DataPager control does support GET-based links via the QueryStringParameter property, but the control is effectively tied to the ListView control (which is the only control that implements IPageableItemContainer).

    DataSource controls required for efficient data paging retrieval. The only way to get paging to work efficiently – where only the few records you display on the page are queried for and retrieved from the database – is to use a DataSource control, and only the Linq and Entity DataSource controls support this natively. While you can retrieve this data yourself manually, there's no way to just assign the page number and render the pager based on this custom subset. Other than that, default paging requires a full resultset for ASP.NET to filter the data and display only a subset, which can be very resource intensive and wasteful if you're dealing with largish resultsets (although I'm a firm believer in returning actually usable sets :-}). If you use your own business layer that doesn't fit an ObjectDataSource you're SOL. That's a real shame too, because with LINQ-based querying it's real easy to retrieve a subset of data that is just the data you want to display, but the native pager functionality doesn't support just setting properties to display that subset, AFAIK.

    DataPager is not free standing. The DataPager control is the closest thing to a decent pager implementation that ASP.NET has, but alas it's not a free-standing component – it works off a related control, and the only one it effectively supports from the stock ASP.NET controls is the ListView control. This means you can't use the same data pager formatting for a grid and a list view or vice versa, and you're always tied to the control.
    Paging events. In order to handle paging you have to deal with paging events. The events fire at specific times in the page pipeline, and because of this you often have to handle data binding in a way that works around the paging events – or else end up double-binding your data sources based on paging. Yuk.

    Styling. The GridView pager is a royal pain to beat into submission for styled rendering. The DataPager control has many more options and template layouts and it renders somewhat cleaner, but it too is not exactly easy to get a decent display out of.

    Not a generic solution. The problem with the ASP.NET controls, too, is that they're not generic. GridView and DataGrid use their own internal paging, ListView can use a DataPager, and if you want to manually create a data layout – well, you're on your own. IOW, depending on what you use you likely have very different-looking paging experiences. So, having struggled with this one time too many, I finally sat down and built a Pager control.

    The Pager Control
    My goal was to create a totally free-standing control that has no dependencies on other controls and certainly no requirement for DataSource controls. The idea is that you should be able to use this pager control without any sort of data requirements at all – you should just be able to set properties and display a pager. The Pager control I ended up with has the following features:
    Completely free-standing Pager control – no control or data dependencies
    Complete manual control – the Pager can render without any data dependency
    Easy to use: only need to set PageSize, ActivePage and TotalItems
    Supports optional filtering of IQueryable for efficient queries and pager rendering
    Supports optional full-set filtering of IEnumerable<T> and DataTable
    Page links are plain HTTP GET href links
    Control automatically picks up page links on the URL and assigns them (automatic page detection – no page index changing events to hook up)
    Full CSS styling support

    On the downside there's no templating support for the control, so the layout of the pager is relatively fixed. All elements however are stylable, and there are options to control the text and layout, such as whether to display the first and last pages and the previous/next buttons and so on. To give you an idea what the pager looks like, here are two differently styled examples (all via CSS). The markup for these two pagers looks like this:

    <ww:Pager runat="server" id="ItemPager" PageSize="5" PageLinkCssClass="gridpagerbutton" SelectedPageCssClass="gridpagerbutton-selected" PagesTextCssClass="gridpagertext" CssClass="gridpager" RenderContainerDiv="true" ContainerDivCssClass="gridpagercontainer" MaxPagesToDisplay="6" PagesText="Item Pages:" NextText="next" PreviousText="previous" />
    <ww:Pager runat="server" id="ItemPager2" PageSize="5" RenderContainerDiv="true" MaxPagesToDisplay="6" />

    The latter example uses default style settings, so there's not much to set. The first example, on the other hand, explicitly assigns custom styles and overrides a few of the formatting options.

    Styling
    The styling is based on a number of CSS classes, of which the main pager, pagerbutton and pagerbutton-selected classes are the important ones. Other styles like pagerbutton-next/prev/first/last are based on the pagerbutton style.
    The default styling shown for the red outlined pager looks like this:

    .pagercontainer { margin: 20px 0; background: whitesmoke; padding: 5px; }
    .pager { float: right; font-size: 10pt; text-align: left; }
    .pagerbutton,.pagerbutton-selected,.pagertext { display: block; float: left; text-align: center; border: solid 2px maroon; min-width: 18px; margin-left: 3px; text-decoration: none; padding: 4px; }
    .pagerbutton-selected { font-size: 130%; font-weight: bold; color: maroon; border-width: 0px; background: khaki; }
    .pagerbutton-first { margin-right: 12px; }
    .pagerbutton-last,.pagerbutton-prev { margin-left: 12px; }
    .pagertext { border: none; margin-left: 30px; font-weight: bold; }
    .pagerbutton a { text-decoration: none; }
    .pagerbutton:hover { background-color: maroon; color: cornsilk; }
    .pagerbutton-prev { background-image: url(images/prev.png); background-position: 2px center; background-repeat: no-repeat; width: 35px; padding-left: 20px; }
    .pagerbutton-next { background-image: url(images/next.png); background-position: 40px center; background-repeat: no-repeat; width: 35px; padding-right: 20px; margin-right: 0px; }

    Yup, that's a lot of styling settings, although not all of them are required. The key ones are pager, pagerbutton and pagerbutton-selected. The others (which are implicitly created by the control based on the pagerbutton style) are for custom markup of the 'special' buttons. In my apps I tend to have two kinds of pages: those associated with typical 'grid' displays that show purely tabular data, and those that have a looser, list-like layout. The two pagers shown above represent these two views, and the pager and gridpager styles in my standard style sheet reflect these two styles.

    Configuring the Pager with Code
    Finally let's look at what it takes to hook up the pager. As mentioned in the highlights, the Pager control is completely independent of other controls, so if you just want to display a pager on its own it's as simple as dropping the control and assigning the PageSize, ActivePage and either TotalPages or TotalItems. So for this markup:

    <ww:Pager runat="server" id="ItemPagerManual" PageSize="5" MaxPagesToDisplay="6" />

    I can use code as simple as:

    ItemPagerManual.PageSize = 3;
    ItemPagerManual.ActivePage = 4;
    ItemPagerManual.TotalItems = 20;

    Note that ActivePage is not required – it will automatically use any Page=x query string value and assign it, although you can override it as I did above. TotalItems can be any value that you retrieve from a result set or manually assign as I did above. A more realistic scenario based on a LINQ to SQL IQueryable result is even easier. In this example, I have a UserControl that contains a ListView control that renders IQueryable data. I use a UserControl here because there are different views the user can choose from, with each view being a different user control. This incidentally also highlights one of the nice features of the pager: because the pager is independent of the control, I can put the pager on the host page instead of into each of the user controls. IOW, there's only one Pager control, but there are potentially many user controls/listviews that hold the actual display data. The following code demonstrates how to use the Pager with an IQueryable that loads only the records it displays:

    protected void Page_Load(object sender, EventArgs e)
    {
        Category = Request.Params["Category"] ?? string.Empty;
        IQueryable<wws_Item> ItemList = ItemRepository.GetItemsByCategory(Category);

        // Update the pager and filter the list down
        ItemList = ItemPager.FilterIQueryable<wws_Item>(ItemList);

        // Render user control with a list view
        Control ulItemList = LoadControl("~/usercontrols/" + App.Configuration.ItemListType + ".ascx");
        ((IInventoryItemListControl)ulItemList).InventoryItemList = ItemList;
        phItemList.Controls.Add(ulItemList); // placeholder
    }

    The code uses a business object to retrieve items by category as an IQueryable, which means that the result is only an expression tree that hasn't executed SQL yet and can be further filtered. I then pass this IQueryable to the FilterIQueryable() helper method of the control, which does two main things: it filters the IQueryable to retrieve only the data displayed on the active page, and it sets the TotalItems property and calculates TotalPages on the Pager. And that's it! When the Pager renders, it uses those values, plus the PageSize and ActivePage properties, to render the pager. In addition to IQueryable there are also filter methods for IEnumerable<T> and DataTable, but these versions just filter the data by removing rows/items from the entire already-retrieved data.

    Output Generated and Paging Links
    The output generated creates pager links as plain href links. Here's what the output looks like:

    <div id="ItemPager" class="pagercontainer">
    <div class="pager">
    <span class="pagertext">Pages: </span><a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=1" class="pagerbutton">1</a>
    <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=2" class="pagerbutton">2</a>
    <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton">3</a>
    <span class="pagerbutton-selected">4</span>
    <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton">5</a>
    <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=6" class="pagerbutton">6</a>
    <a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=20" class="pagerbutton pagerbutton-last">20</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=3" class="pagerbutton pagerbutton-prev">Prev</a>&nbsp;<a href="http://localhost/WestWindWebStore/itemlist.aspx?Page=5" class="pagerbutton pagerbutton-next">Next</a></div>
    <br clear="all" />
    </div>

    The links point back to the current page and simply append a Page= page number to the URL. When the page gets reloaded with the new page number, the pager automatically detects the page number and assigns the ActivePage property, which results in the appropriate page being displayed. The code shown in the previous section is all that's needed to handle paging. Note that HTTP GET based paging is different than the postback paging ASP.NET uses by default. Postback paging preserves modified page content when clicking on pager buttons, but this control will simply load a new page – no page preservation at this time. The advantage of not using postback paging is that the URLs generated are plain HTML links that a search engine can follow, where __doPostback() links are not.

    Pager with a Grid
    The pager also works in combination with grid controls, so it's easy to bypass the grid control's paging features if desired. In the following example I use a grid control and bind it to a DataTable result, which is also filterable by the Pager control.
    The very basic, plain-vanilla ASP.NET grid markup looks like this:

    <div style="width: 600px; margin: 0 auto;padding: 20px; ">
    <asp:DataGrid runat="server" AutoGenerateColumns="True" ID="gdItems" CssClass="blackborder" style="width: 600px;">
    <AlternatingItemStyle CssClass="gridalternate" />
    <HeaderStyle CssClass="gridheader" />
    </asp:DataGrid>
    <ww:Pager runat="server" ID="Pager" CssClass="gridpager" ContainerDivCssClass="gridpagercontainer" PageLinkCssClass="gridpagerbutton" SelectedPageCssClass="gridpagerbutton-selected" PageSize="8" RenderContainerDiv="true" MaxPagesToDisplay="6" />
    </div>

    and looks like this when rendered, using a custom set of CSS styles. The code-behind for this is also very simple:

    protected void Page_Load(object sender, EventArgs e)
    {
        string category = Request.Params["category"] ?? "";

        busItem itemRep = WebStoreFactory.GetItem();
        var items = itemRep.GetItemsByCategory(category)
                           .Select(itm => new {Sku = itm.Sku, Description = itm.Description});

        // run query into a DataTable for demonstration
        DataTable dt = itemRep.Converter.ToDataTable(items, "TItems");

        // Remove all items not on the current page
        dt = Pager.FilterDataTable(dt, 0);

        // bind and display
        gdItems.DataSource = dt;
        gdItems.DataBind();
    }

    A little contrived, I suppose, since the grid could already have been bound from the list of elements, but this demonstrates that you can also bind against a DataTable if your business layer returns those. Unfortunately there's no way to filter a DataReader, as it's a one-way, forward-only reader and the reader is required by the DataSource to perform the binding. However, you can still use a DataReader as long as your business logic filters the data prior to rendering and provides a total item count (most likely via a second query).

    Control Creation
    The control itself is a pretty brute-force ASP.NET control. Nothing clever about it other than some basic rendering logic and some simple calculations and update routines to determine which buttons need to be shown. You can take a look at the full code in the West Wind Web Toolkit's repository (note there are a few dependencies).
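    Before looking at the rendering code, it may help to see the paging math the control's properties imply. The following is a minimal sketch – the class and method names here are hypothetical, not the control's actual code – of how a TotalPages value can be derived from TotalItems and PageSize with integer ceiling division:

        public class PagerMath
        {
            // Integer ceiling division: a last, partially filled page still counts.
            public static int TotalPages(int totalItems, int pageSize)
            {
                if (pageSize < 1) return 0;
                return (totalItems + pageSize - 1) / pageSize;
            }

            public static void Main()
            {
                System.Console.WriteLine(TotalPages(20, 5)); // 4 full pages
                System.Console.WriteLine(TotalPages(21, 5)); // 5 pages; last page holds 1 item
            }
        }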
To give you an idea how the control works here is the Render() method: /// <summary> /// overridden to handle custom pager rendering for runtime and design time /// </summary> /// <param name="writer"></param> protected override void Render(HtmlTextWriter writer) { base.Render(writer); if (TotalPages == 0 && TotalItems > 0) TotalPages = CalculateTotalPagesFromTotalItems(); if (DesignMode) TotalPages = 10; // don't render pager if there's only one page if (TotalPages < 2) return; if (RenderContainerDiv) { if (!string.IsNullOrEmpty(ContainerDivCssClass)) writer.AddAttribute("class", ContainerDivCssClass); writer.RenderBeginTag("div"); } // main pager wrapper writer.WriteBeginTag("div"); writer.AddAttribute("id", this.ClientID); if (!string.IsNullOrEmpty(CssClass)) writer.WriteAttribute("class", this.CssClass); writer.Write(HtmlTextWriter.TagRightChar + "\r\n"); // Pages Text writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(PagesTextCssClass)) writer.WriteAttribute("class", PagesTextCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(this.PagesText); writer.WriteEndTag("span"); // if the base url is empty use the current URL FixupBaseUrl(); // set _startPage and _endPage ConfigurePagesToRender(); // write out first page link if (ShowFirstAndLastPageLinks && _startPage != 1) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-first"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write("1"); writer.WriteEndTag("a"); writer.Write("&nbsp;"); } // write out all the page links for (int i = _startPage; i < _endPage + 1; i++) { if (i == ActivePage) { writer.WriteBeginTag("span"); if (!string.IsNullOrEmpty(SelectedPageCssClass)) writer.WriteAttribute("class", SelectedPageCssClass); writer.Write(HtmlTextWriter.TagRightChar); writer.Write(i.ToString()); writer.WriteEndTag("span"); } else { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, i.ToString()).TrimEnd('&'); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(i.ToString()); writer.WriteEndTag("a"); } writer.Write("\r\n"); } // write out last page link if (ShowFirstAndLastPageLinks && _endPage < TotalPages) { writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, TotalPages.ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-last"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(TotalPages.ToString()); writer.WriteEndTag("a"); } // Previous link if (ShowPreviousNextLinks && !string.IsNullOrEmpty(PreviousText) && ActivePage > 1) { writer.Write("&nbsp;"); writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage - 1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-prev"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(PreviousText); writer.WriteEndTag("a"); } // Next link if (ShowPreviousNextLinks && 
!string.IsNullOrEmpty(NextText) && ActivePage < TotalPages) { writer.Write("&nbsp;"); writer.WriteBeginTag("a"); string pageUrl = StringUtils.SetUrlEncodedKey(BaseUrl, QueryStringPageField, (ActivePage + 1).ToString()); writer.WriteAttribute("href", pageUrl); if (!string.IsNullOrEmpty(PageLinkCssClass)) writer.WriteAttribute("class", PageLinkCssClass + " " + PageLinkCssClass + "-next"); writer.Write(HtmlTextWriter.SelfClosingTagEnd); writer.Write(NextText); writer.WriteEndTag("a"); } writer.WriteEndTag("div"); if (RenderContainerDiv) { if (RenderContainerDivBreak) writer.Write("<br clear=\"all\" />\r\n"); writer.WriteEndTag("div"); } }

    As I said, pretty much brute-force rendering based on the control's property settings, of which there are quite a few. You can also see the pager in the designer above. Unfortunately the VS designer (both 2010 and 2008) fails to render the float: left CSS styles properly and starts wrapping after margins are applied to the special buttons. Not a big deal, since VS does at least respect the spacing (the floated elements overlay). Then again, I'm not using the designer anyway :-}.

    Filtering Data
    What makes the Pager easy to use is the filter methods built into the control. While this functionality is clearly not the most politically correct design choice, as it violates separation of concerns, it's very useful for typical pager operation. While I actually have filter methods that do something similar in my business layer, having them exposed on the control makes the control a lot more useful for typical databinding scenarios. Of course these methods are optional – if you have a business layer that can provide filtered page queries for you, you can use that instead and assign the TotalItems property manually. There are three filter method types available – for IQueryable, IEnumerable and DataTable – which tend to be the most common use cases in my apps, old and new. The IQueryable version is pretty simple, as it can simply rely on .Skip() and .Take() with LINQ:

    /// <summary> /// Queries the database for the ActivePage applied manually /// or from the Request["page"] variable. This routine /// figures out and sets TotalPages, ActivePage and /// returns a filtered subset IQueryable that contains /// only the items from the ActivePage. /// </summary> /// <param name="query"></param> /// <param name="activePage"> /// The page you want to display. Sets the ActivePage property when passed. /// Pass 0 or smaller to use the ActivePage setting. /// </param> /// <returns></returns> public IQueryable<T> FilterIQueryable<T>(IQueryable<T> query, int activePage) where T : class, new() { ActivePage = activePage < 1 ? ActivePage : activePage; if (ActivePage < 1) ActivePage = 1; TotalItems = query.Count(); if (TotalItems <= PageSize) { ActivePage = 1; TotalPages = 1; return query; } int skip = ActivePage - 1; if (skip > 0) query = query.Skip(skip * PageSize); _TotalPages = CalculateTotalPagesFromTotalItems(); return query.Take(PageSize); }

    The IEnumerable<T> version simply converts the IEnumerable to an IQueryable and calls back into this method for filtering. The DataTable version requires a little more work, manually parsing and filtering records (I didn't want to add the Linq DataSetExtensions assembly just for this):

    /// <summary> /// Filters a data table for an ActivePage. /// /// Note: Modifies the data set permanently by removing DataRows /// </summary> /// <param name="dt">Full result DataTable</param> /// <param name="activePage">Page to display. 0 to use the ActivePage property</param> /// <returns></returns> public DataTable FilterDataTable(DataTable dt, int activePage) { ActivePage = activePage < 1 ? ActivePage : activePage; if (ActivePage < 1) ActivePage = 1; TotalItems = dt.Rows.Count; if (TotalItems <= PageSize) { ActivePage = 1; TotalPages = 1; return dt; } int skip = ActivePage - 1; if (skip > 0) { for (int i = 0; i < skip * PageSize; i++ ) dt.Rows.RemoveAt(0); } while(dt.Rows.Count > PageSize) dt.Rows.RemoveAt(PageSize); return dt; }

    Using the Pager Control
    The pager as it stands is a first cut I built a couple of weeks ago, and I've since been tweaking it a little as part of an internal project I'm working on. I've replaced a bunch of pagers on various older pages with this pager without any issues, and now have what feels like a more consistent user interface, where paging looks and feels the same across different controls. As a bonus, I'm only loading the data from the database that I need to display a single page. With the preset class tags applied, adding a pager is now as easy as dropping the control and adding the style sheet for consistent styling – no fuss, no muss. Schweet. Hopefully some of you may find this as useful as I have, or at least as a baseline to build on top of…

    Resources
    The Pager is part of the West Wind Web & Ajax Toolkit
    Pager.cs source code (some toolkit dependencies)
    Westwind.css base stylesheet with .pager and .gridpager styles
    Pager example page

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Windows Server 2012 Branchcache vs. DFS-R

    - by TheCleaner
    Warning, subjective question ahead! But hopefully a good one that won't get closed. SCENARIO: I have a branch office that currently has no on-premise server. They access everything, including a DC, across a 12Mbps WAN link (MPLS). The link isn't saturated, averaging around 20% utilization. The circuit is very stable and has a high SLA and excellent uptime. However, large file transfers (mainly reads, not writes) from the file server across the WAN can be slow. We don't currently utilize DFS. RESEARCH DONE: I'm aware of WAN acceleration, using either dedicated hardware (Riverbed) or a dedicated software VM (Silver Peak), for example. But the pricing is outside of our current budget, and the need isn't quite there yet from our perspective (since the issue is mainly a "pull" scenario, not necessarily push/pull). I'm mainly looking at deploying a Windows server at this branch office and utilizing either DFS-R or BranchCache. Looking at a table comparison, and assuming we are looking at a "hosted BranchCache server" and not simply distributed mode, it would appear there are benefits to both, even if both are "hosted" on a server. QUESTIONS I ACTUALLY HAVE: In what scenarios does each of these techs shine, and when do you choose one over the other? Looking at a hosted BranchCache server, can you set "pre-fetching" of certain folders/files on the central file server so that they are immediately accessible locally at the branch? Do you have to do this on a schedule (if it is possible at all)? Looking at DFS-R, my concern (apparently solved with 3rd-party apps) is file locking and making sure a file gets updated properly during a write operation (i.e., if both copies are accessed and both are written to, which file takes precedence and what happens to the changes?). Ideally, it would seem, you would lock any alternate replicas of the data, but is it really that big of an issue? Does BranchCache lock the central file for editing? Does BranchCache transmit back to the central file only the deltas of what has changed? Would either technology be ill-advised if the branch office server was also going to be utilized as a domain controller?
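    On the pre-fetching question, a rough sketch with the BranchCache PowerShell cmdlets that ship with Windows Server 2012 (the share paths and host names below are illustrative assumptions, and any recurring staging would be driven by a scheduled task):

        # On the branch server: make it the hosted cache for the site
        Enable-BCHostedServer -RegisterSCP

        # On the central file server: pre-hash ("stage") a share's content
        Publish-BCFileContent -Path D:\Shares\Engineering -StageData

        # Package the staged data, copy it to the branch, then preload it there
        Export-BCCachePackage -Destination D:\Staging
        Import-BCCachePackage -Path \\branch-srv\staging\PeerDistPackage.zip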

    Read the article

  • Which Linux distro for Mac Mini?

    - by spoon16
    I recently received a Mac Mini and would like to set it up as a web server and Git source server. I would like to learn Linux, so I'm interested in setting up my Mac Mini with Linux instead of OS X. Here are the main things that I will be using the Mac Mini for: Git repositories (via Gitosis); a build server (building projects in Git repositories using commit hooks and running tests); simple websites (PHP); learning C++ in a non-Windows environment. What distribution would you recommend? Please provide some detail in your answer so that I can make a meaningful decision. Because I am looking to use the Mini as more of a server than a normal desktop machine I was thinking of Ubuntu Server; I'm not sure if that is overkill though, given the hardware I am using.
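    For a sense of how little is needed once a distro is installed, here is a minimal sketch using plain Git over SSH (rather than Gitosis; the paths and host name are illustrative):

        # On the Mini: create a bare repository to push to
        sudo mkdir -p /srv/git/project.git
        cd /srv/git/project.git && sudo git init --bare

        # On a workstation: point a clone at the Mini and push
        git remote add mini ssh://user@mac-mini.local/srv/git/project.git
        git push mini master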

    Read the article

  • Can't run PHPUnit tests in bash on Ubuntu 11.10

    - by Mohamad Elbialy
    I'm working with Ubuntu 11.10 as root on my local machine, I've installed XAMPP 1.7.7, and I'm a newbie to Ubuntu. While following a tutorial on SitePoint (http://www.sitepoint.com/getting-started-with-pear/) on how to install PEAR to use PHPUnit, I didn't notice it at the time, but it seems I installed (or used an existing) PHP version 5.3.6 on the command line to do that, and the PEAR installation was built on this version. With XAMPP installed, I now have two versions of PHP: XAMPP's 5.3.8 and the 5.3.6. Anyway, what I want to do is use the existing XAMPP PHP version and build PEAR on that, so all my work goes through XAMPP. So my questions are:
    1. How do I uninstall PHP 5.3.6 and its PEAR installation?
    2. How do I link the command line with XAMPP's PHP version?
    3. How do I build the next PEAR installation on XAMPP's PHP version? I want all my web dev work to go through XAMPP – is there anything else I need to uninstall to avoid this confusion?
    4. I did the following in an attempt to solve the problem. I wrote this in bash: gedit ~/.bashrc, and added this to the end of the ~/.bashrc file in an attempt to change the environment path:
    export PATH=/opt/lampp/bin:$PATH
    export PATH=/opt/lampp/lib/php:$PATH
    export PATH=/opt/lampp/lib/php/PHPUnit/pearcmd.php:$PATH
    I checked the PHP and PEAR versions using 'php -v' and 'pear list' and got an output of:
    PHP 5.3.8 (cli) (built: Sep 19 2011 13:29:27) Copyright (c) 1997-2011 The PHP Group Zend Engine v2.3.0, Copyright (c) 1998-2011 Zend Technologies
    and for PEAR:
    Installed packages, channel pear.php.net:
    =========================================
    Package          Version State
    Archive_Tar      1.3.9   stable
    Console_Getopt   1.3.1   stable
    PEAR             1.9.4   stable
    PHPUnit          1.3.2   stable
    Structures_Graph 1.0.4   stable
    XML_Util         1.2.1   stable
    When I run 'phpunit MessageTest.php', I get:
    PHP Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
    Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
    PHP Fatal error: require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='.:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR') in /usr/bin/phpunit on line 38
    5. I ran the following commands, reported in other questions as a solution to that error:
    sudo apt-get remove phpunit
    sudo pear channel-discover pear.phpunit.de
    sudo pear channel-discover pear.symfony-project.com
    sudo pear channel-discover components.ez.no
    sudo pear update-channels
    sudo pear upgrade-all
    sudo pear install --alldeps phpunit/PHPUnit
    sudo apt-get install phpunit
    and updated the include path of php.ini to be: include_path = ".:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR"
    The PHP file MessageTest.php:
    <?php
    require 'PHPUnit/Autoload.php';
    $path = '/opt/lampp/lib/php/PEAR';
    set_include_path(get_include_path() . PATH_SEPARATOR . $path);
    require_once 'PHPUnit/Framework/TestCase.php';
    require_once 'Message/Controller/MessageController.php';
    class MessageTest extends PHPUnit_Framework_TestCase {
        private $message;
        public function setUp() {
            $this->message = new MessageController();
        }
        public function tearDown() { }
        public function testRepeat() {
            $yell = "Hello, Any One Out There?";
            $this->message->repeat($yell); // sending a request
            $returnedMessage = $this->message->repeat($yell); // get a response
            $this->assertEquals($returnedMessage, $yell);
        }
    }
    ?>
    The MessageController class from MessageController.php that I'm trying to test:
    <?php
    class MessageController {
        public function actionHelloWorld() {
            echo 'helloWorld';
        }
        public function repeat($inputString) {
            return $inputString;
        }
    }
    $msg = new MessageController;
    ?>
    I'm not using any PHP framework; I just made the files and classes, that's all. And still I get the same error:
    PHP Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
    Warning: require_once(PHP/CodeCoverage/Filter.php): failed to open stream: No such file or directory in /usr/bin/phpunit on line 38
    PHP Fatal error: require_once(): Failed opening required 'PHP/CodeCoverage/Filter.php' (include_path='.:/php/includes:/opt/lampp/lib/php:/opt/lampp/bin:/opt/lampp/lib/php/PEAR') in /usr/bin/phpunit on line 38
    Sure, I'm getting demanding here – I've wasted a lot of time and got really frustrated over this. I hope you guys don't get bored reading through my questions; I appreciate your help. Thanks in advance, Mohamad Elbialy
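    A sketch of the usual way out of this particular error (the paths are assumptions based on the XAMPP layout above): the error message names /usr/bin/phpunit, i.e. the distro's binary, which is being picked up ahead of the XAMPP/PEAR one, so check what the shell resolves and run the XAMPP copy explicitly:

        # Which phpunit binaries are on the PATH, and in what order?
        which -a phpunit

        # Drop the Ubuntu package so it can't shadow the PEAR install
        sudo apt-get remove phpunit
        hash -r                      # clear bash's cached command lookups

        # Run the XAMPP/PEAR copy explicitly (path is an assumption)
        /opt/lampp/bin/phpunit MessageTest.php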

    Read the article

  • bash completion with aliases

    - by dstarh
    I have a bunch of bash completion scripts set up (mostly using bash-it and some manually set up). I also have a bunch of aliases set up for common tasks, like gco for git checkout. Right now I can type git checkout d<Tab> and develop is completed for me, but when I type gco d<Tab> it does not complete. I'm assuming this is because the completion script is completing on git and it fails to see gco. Is there a way to generically/programmatically get all of my completion scripts to work with my aliases? Not being able to complete when using the alias kind of defeats the purpose of the alias.
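    For the git case specifically, a minimal sketch (assuming the stock git-completion.bash, which defines the __git_complete helper, is already sourced in the shell) is to bind the alias to the sub-command's completion function:

        # ~/.bashrc — reuse git checkout's completion for the gco alias
        alias gco='git checkout'
        __git_complete gco _git_checkout

    Bash has no built-in, completion-system-wide alias expansion, so each alias has to be wired up this way (or via a small loop over your alias list).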

    Read the article

  • Is there a way I can use $PATH as defined by my bash profile?

    - by Adam Backstrom
    I spend most of my day ssh'd into servers. I have a series of aliases/functions/scripts that allow me to type p hostname from the terminal and execute GNU screen(1) on the remote side, using the following command: exec ssh hostname -t 'screen -RD' I've only recently noticed that ssh -t does not get my custom $PATH. Here's some terminal output: adam@workstation:~:0$ ssh server 'echo $PATH' /home/adam/bin:/usr/local/bin:/bin:/usr/bin:/opt/git/bin:/opt/git/libexec/git-core adam@workstation:~:0$ ssh server -t 'echo $PATH' /usr/local/bin:/bin:/usr/bin Connection to uranus.plymouth.edu closed. My biggest problem is my custom aliases only try to execute screen, since I can't guarantee an absolute path, and my $PATH is structured so the shell should find the correct one. If my $PATH settings aren't honored, my scripts don't work. Is there a way I can use $PATH as defined by my .bashrc/.bash_profile? I believe PermitUserEnvironment is disabled.
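    One sketch of a workaround, assuming bash on the remote side: ask for a login shell explicitly, so ~/.bash_profile (and the PATH it sets) is read before screen starts:

        # Force a login shell on the remote end, then run screen from it
        exec ssh hostname -t 'bash -l -c "screen -RD"'

    Alternatively, setting PATH near the top of ~/.bashrc (before any early exit for non-interactive shells) covers the non-login case as well.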

    Read the article

  • homebrew in mac lion

    - by user975352
    I'm a beginner on Mac Lion (10.7.2). I don't know the Mac well, but I do know Ubuntu. I installed Homebrew on my Mac and ran the commands below. $ brew install git and then $ brew update error: Could not resolve host: github.com; nodename nor servname provided, or not known while accessing https://github.com/mxcl/homebrew.git/info/refs fatal: HTTP request failed Error: Failed while executing git pull origin refs/heads/master:refs/remotes/origin/master What's happening on my Mac? How do I resolve this? Would you help me?
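    The error is DNS resolution failing for github.com rather than Homebrew itself; a short diagnostic sketch:

        # Can the machine resolve and reach github.com at all?
        ping -c 1 github.com

        # Which DNS resolvers is OS X actually using?
        scutil --dns | grep nameserver

        # Check /etc/hosts for stale overrides, then retry
        grep github /etc/hosts
        brew update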

    Read the article

  • SSH Port Forward 22

    - by j1199dm
    I'm trying to set up the following: at work, I want to create a local port that will forward to port 22 on my home server. ssh -L 56879:home:22 username@home -p 443 Right now I'm testing this on my two machines at home, one my Ubuntu server and the other my iMac. iMac: 192.168.1.104 ubuntu: 192.168.1.103 iMac - ssh -p 443 -L 56879:192.168.1.103:22 [email protected] In ~/.ssh/config on my iMac I have the port set to 56879. So when I do git pull remoteserver:/path/to/repo.git on my iMac, git will use the SSH client on my iMac with port 56879 (since that's set up in the config), which should forward to port 22 on my Ubuntu machine. I keep getting "connection refused". Any ideas?
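    One detail that commonly bites here: the forwarded port listens on the local loopback, so the SSH config entry git uses has to aim at 127.0.0.1, not at the remote machine's name. A sketch of such an entry (the host alias is illustrative):

        # ~/.ssh/config on the iMac
        Host homeserver
            HostName 127.0.0.1
            Port 56879

        # then, while the tunnel is up:
        #   git pull homeserver:/path/to/repo.git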

    Read the article

  • Route multiple subdomains on one external ip to multiple internal ips

    - by Abenil
    I have several subdomains (git.example.org, build.example.org, etc.), I have a router with an external IP, and I have several virtual machines on a host computer with internal IPs. Now I want to route git.example.org to internal IP 10.0.2.1 and build.example.org to internal IP 10.0.2.2. How can I do this? I set up the router so that all traffic on port 80 goes to my host computer with internal IP 10.0.2.3, and installed Squid on that computer. I added the following lines to the squid.conf file: cache_peer 10.0.2.1 parent 80 0 no-query originserver name=server_1 cache_peer_domain server_1 git.example.org cache_peer 10.0.2.2 parent 80 0 no-query originserver name=server_2 cache_peer_domain server_2 build.example.org But this is not working for me. :( Any help appreciated. Regards Nils Update: Here is the solution for Apache http://serverfault.com/a/273693
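    For comparison, a sketch of the Apache approach the update points to: name-based virtual hosts on 10.0.2.3 reverse-proxying to the internal machines (assumes mod_proxy and mod_proxy_http are enabled):

        <VirtualHost *:80>
            ServerName git.example.org
            ProxyPass        / http://10.0.2.1/
            ProxyPassReverse / http://10.0.2.1/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName build.example.org
            ProxyPass        / http://10.0.2.2/
            ProxyPassReverse / http://10.0.2.2/
        </VirtualHost>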

    Read the article

  • How to remove a package I compiled and installed manually?

    - by macek
    I recently compiled and installed Git on a new install of Mac OS 10.6 but it didn't install the documentation. I now realize I should've used the precompiled package offered here: http://code.google.com/p/git-osx-installer/downloads/list How do I remove all the files that I added to my system using make install with the Git source code? Edit: I've had similar problems in the past with other packages, too. For example, ./configure with the incorrect --prefix= or something. What's the general practice for removing unix packages?
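    A sketch of the usual options, run from the source directory you built in (the staging path is illustrative):

        # If the project's Makefile defines it, this is the clean route:
        sudo make uninstall

        # Otherwise, re-install into a scratch root to enumerate exactly
        # which files 'make install' copied, then delete those from the prefix:
        make install DESTDIR=/tmp/git-staging
        ( cd /tmp/git-staging && find . -type f ) | sed 's|^\.||'

    For future builds, installing into an isolated prefix (./configure --prefix=/usr/local/git-x.y, or a tool such as GNU Stow) keeps everything in one directory that can simply be deleted.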

    Read the article

  • Connect Cisco SBCS with Nortel BCM (H.323)

    - by D4
    Hi, I am wondering if it's possible to connect a Cisco Unified Communications 500 (at a branch office) to our main site's Nortel BCM as a remote gateway for VoIP communication. I've had a little experience with branch office telephony, but only with Nortel technology; I don't know if the Cisco SBCS will play along with the other kids. :P Thanks in advance... Any thought is more than welcome.

    Read the article

  • Updating wordpress in a multi-node environment

    - by Peter
    I'm finding this very tricky in a multi-node environment with code under revision control – i.e., multiple frontends and a single database. I have a deployment process that pushes a Git repo to the servers, but obviously if I update WordPress from within the admin panel, it will update the files on only one FE. Then I would need to copy the new files over to the other FE nodes. Plus, whenever these changes are written as WordPress updates on a node, it writes code into the Git repo. That then breaks the automated deploys that perform 'git pull', since the repo has untracked changes and refuses to pull in new deploys without manual intervention. How does one easily keep WordPress updated in a multi-node (load-balanced) environment?
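    One common sketch of a fix: make the deploy pipeline the only thing that changes files, and tell WordPress not to touch the filesystem at all, so updates happen once in the repo and roll out to every node through the normal deploy:

        <?php
        // wp-config.php — disable the admin panel's plugin/theme/core
        // editors and updaters on every node; updates are then performed
        // in a working copy, committed, and pushed through the deploy.
        define( 'DISALLOW_FILE_MODS', true );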

    Read the article

  • nginx deny directory and files to be downloaded

    - by YeppThat'sMe
    Gurus, I have a problem and I don't know how to solve it. I am working with Git and Compass/SASS on some projects. Now I want to protect those directories. When I go to the folder itself it's all fine – I get what I expected, a 403 Forbidden. location ~ /\.git { deny all; } But when I try the full path to a config file from Git, the browser starts to download it. Same scenario with Compass: there is a config.rb file within the folder, which also starts to download. How can I prevent this behaviour? How can I deny downloading specific files?
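    A sketch of a tighter config, with one assumption worth checking: nginx uses the first regex location that matches in file order, so these blocks must appear before any other regex location (a static-file or PHP handler, say) that could match the same URIs:

        # VCS metadata anywhere in the path
        location ~ /\.git {
            deny all;
        }

        # Build-tool files such as Compass's config.rb and the sass cache
        location ~ /(config\.rb|\.sass-cache) {
            deny all;
        }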

    Read the article

  • user related commands hang on open("/etc/localtime", O_RDONLY) = 4 in CentOS 5.5

    - by fuzzy lollipop
    I am logged in as root. When doing strace -e trace=open adduser git, it hangs on open("/etc/localtime", O_RDONLY) = 4 for about 2 minutes, then continues on. Also, when I strace -e trace=open su git, it just hangs at the same place, and I can't log in via SSH as the git user either. Some other users I created work just fine, like su tomcat, and I can SSH in as tomcat as well. I deleted the file that was at /etc/localtime and replaced it with a symlink, ln -s /usr/share/zoneinfo/US/Eastern /etc/localtime, and it didn't change the behavior in any way.
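    Since the open() of /etc/localtime actually succeeded (it returned fd 4), the stall is usually in whatever syscall comes next – typically a name-service lookup. A diagnostic sketch:

        # Watch network traffic (NSS/LDAP/NIS lookups) instead of file opens
        strace -f -e trace=network su git

        # Which backends does the system consult for users and groups?
        grep -E '^(passwd|group|shadow)' /etc/nsswitch.conf

        # Does the lookup itself hang for this particular user?
        getent passwd git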

    Read the article

  • Linux periodically "losing" ability to connect to server via SSH?

    - by gct
    I know this isn't exactly a programming question, but it popped up in my use of git for programming projects at least. I've got a web server that I use to host my git repos on, but my ubuntu box seems to "lose" the ability to connect to it via SSH. I'll get a "connection refused" error when I try to ssh or use git. Rebooting my local machine will fix the problem, but only temporarily. I can still connect to the web interface just fine, and the problem manifests with other servers as well. I've been working around it by pulling my changes over to my laptop and pushing from there, but that's sub-optimal as you can imagine. Has anyone seen something like this? I'd be tempted to say it's some kind of IP caching problem, but I can't connect even using the IP address of the server directly... Running Ubuntu 9.04
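    When it next happens, a couple of checks narrow down whether the TCP port or the SSH layer is doing the refusing (host name here is illustrative):

        # Is anything reachable on port 22 at that moment?
        nc -vz myserver.example.com 22

        # What does the client see before giving up?
        ssh -vvv user@myserver.example.com

    A plain "connection refused" that clears on a local reboot often points at a stale NAT/conntrack entry or a local firewall rule rather than at the server itself.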

    Read the article

  • How can I get bash to perform tab-completion for my aliases?

    - by dstarh
    I have a bunch of bash completion scripts set up (mostly using bash-it and some manually set up). I also have a bunch of aliases set up for common tasks, like gco for git checkout. Right now I can type git checkout d<Tab> and develop is completed for me, but when I type gco d<Tab> it does not complete. I'm assuming this is because the completion script is completing on git and it fails to see gco. Is there a way to generically/programmatically get all of my completion scripts to work with my aliases? Not being able to complete when using the alias kind of defeats the purpose of the alias.

    Read the article

  • graphical svn client for creating branches, merging branches etc?

    - by ajsie
    Hi, I wonder if there is GUI software to administer an SVN repo? Or do you actually have to log into the Ubuntu server with SSH and use all the svn commands to copy the trunk to a branch, merge the data back and forth, copy to a tag, delete, and so on? I'm using NetBeans on a Mac. I think it only handles the communication between a local project and the repo, not the flows between trunk, branch and tag (creating, deleting, viewing differences, etc.).
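    For reference, the operations any GUI would wrap are just server-side copies – a sketch (the repository URL is illustrative):

        # Branch: a cheap server-side copy of trunk
        svn copy http://svnhost/repo/trunk \
                 http://svnhost/repo/branches/my-branch -m "Create branch"

        # Merge trunk changes into a working copy of the branch
        svn merge http://svnhost/repo/trunk

        # Tag: the same copy operation, read-only by convention
        svn copy http://svnhost/repo/trunk \
                 http://svnhost/repo/tags/1.0 -m "Tag 1.0"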

    Read the article

  • Is it possible to get the matched regex from within the apache LocationMatch directive?

    - by tftd
    I'm wondering if the following code should work: <LocationMatch "/(.*)([/])?(.*)"> Order allow,deny Allow from all AuthType Basic AuthName "Git" AuthUserFile /git/.htpasswd AuthGroupFile /git/.htgroup Require group $1 </LocationMatch> What I am trying to achieve is to require a group based on the first regex variable. So if the user goes to http://localhost/a-repository-name, he has to be in the group a-repository-name to be able to open the URL. For some reason I can't get this code working, and Apache returns: Authorization of user **** to access /a-repository-name failed, reason: user is not part of the 'require'ed group(s). I guess it's not matching against the proper variable at Require group $1. Is this the right way to do it, or am I missing something?
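    As far as I know, Require does not interpolate regex captures at all – the literal string "$1" becomes the required group name – so a working, if verbose, sketch is one explicit block per repository:

        <Location /a-repository-name>
            AuthType Basic
            AuthName "Git"
            AuthUserFile /git/.htpasswd
            AuthGroupFile /git/.htgroup
            Require group a-repository-name
        </Location>

    Blocks like this are easy to generate from the repository list with a small script whenever a repository is added.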

    Read the article

  • Authlogic OpenID integration

    - by Craig
    I'm having difficulty getting OpenID authentication working with Authlogic. It appears that the problem arose with changes to the open_id_authentication plugin. From what I've read so far, one needs to switch from using gems to using plugins. Here's what I've done thus far to get Authlogic-OpenID integration working:
    Removed relevant gems: authlogic, authlogic-oid, rack-openid, ruby-openid *
    Installed, configured, and started the authlogic sample application (http://github.com/binarylogic/authlogic_example) – works as expected. This required installing the authlogic (2.1.3) gem ($ sudo gem install authlogic) and adding a dependency (config.gem "authlogic") to the environment.rb file.
    Added a migration to add OpenID support to the User model; ran the migration; columns added as expected.
    Made changes to the UsersController and UserSessionsController to use blocks to save each.
    Made changes to the new user-sessions view to support OpenID (f.text_field :openid_identifier).
    Installed the open_id_authentication plugin ($ script/plugin install git://github.com/rails/open_id_authentication.git)
    Installed the authlogic-oid plugin ($ script/plugin install git://github.com/binarylogic/authlogic_openid.git)
    Installed the ruby-openid plugin ($ script/plugin install git://github.com/glebm/ruby-openid.git)
    Restarted Mongrel (CTRL-C; $ script/server). Mongrel failed to start, returning the following error:
    /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- rack/openid (MissingSourceFile)
    from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/plugins/open_id_authentication/lib/open_id_authentication.rb:3
    from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
    from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/plugins/open_id_authentication/init.rb:5:in `evaluate_init_rb'
    from ./script/../config/../vendor/rails/railties/lib/rails/plugin.rb:146:in `evaluate_init_rb'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/core_ext/kernel/reporting.rb:11:in `silence_warnings'
    from ./script/../config/../vendor/rails/railties/lib/rails/plugin.rb:142:in `evaluate_init_rb'
    from ./script/../config/../vendor/rails/railties/lib/rails/plugin.rb:48:in `load'
    from ./script/../config/../vendor/rails/railties/lib/rails/plugin/loader.rb:38:in `load_plugins'
    from ./script/../config/../vendor/rails/railties/lib/rails/plugin/loader.rb:37:in `each'
    from ./script/../config/../vendor/rails/railties/lib/rails/plugin/loader.rb:37:in `load_plugins'
    from ./script/../config/../vendor/rails/railties/lib/initializer.rb:348:in `load_plugins'
    from ./script/../config/../vendor/rails/railties/lib/initializer.rb:163:in `process'
    from ./script/../config/../vendor/rails/railties/lib/initializer.rb:113:in `send'
    from ./script/../config/../vendor/rails/railties/lib/initializer.rb:113:in `run'
    from /Users/craibuc/NetBeansProjects/authlogic_example/config/environment.rb:13
    from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
    from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:521:in `new_constants_in'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/activesupport/lib/active_support/dependencies.rb:156:in `require'
    from /Users/craibuc/NetBeansProjects/authlogic_example/vendor/rails/railties/lib/commands/server.rb:84
    from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
    from /Library/Ruby/Site/1.8/rubygems/custom_require.rb:31:in `require'
    from script/server:3
    I suspect this is related to the rack-openid gem, but as it was dependent upon the ruby-openid gem, it was removed when the ruby-openid gem was removed. Perhaps this can be installed as a plugin. Any assistance with this matter is greatly appreciated – I'm just about to give up on OpenID integration.
    * ruby-openid (2.1.2) is installed at /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8. I'm not certain if this is affecting anything. In any case, I'm not sure how to uninstall it or if I should.
    ** edit ** It appears that there are a number of gems in the /Library/Ruby/Gems/1.8/gems directory that may be causing an issue: authlogic-oid (1.0.4), rack-openid (1.0.3), ruby-openid (2.1.7). Questions: Why doesn't the gem list command list these gems? Why doesn't the gem uninstall command remove these gems?
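    Given that the missing constant is rack/openid, one hedged next step (an assumption, not a confirmed fix) is to restore that gem and declare it so the plugin's require can resolve it:

        # restore the gem the open_id_authentication plugin requires
        sudo gem install rack-openid

        # config/environment.rb (Rails 2.x style)
        # config.gem "rack-openid", :lib => "rack/openid"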

    Read the article

  • Productive Toolset for C# Developer

    - by Marko Apfel
    Programming
    Visual Studio ReSharper Agent Johnson Agent Smith StyleCop for ReSharper Keymaps SettingsManager Git Source Control Provider Gist NuGet Package Manager NDepend Productivity Power Tools PowerCommands for Visual Studio PostSharp Indent Guides Typemock Isolator VSCommands Ressource Refactor Clone Detective GhostDoc CR_Documentor AnkSVN Expression Blend SharpDevelop Notepad++, PS Pad StyleCop, FxCop, .. .NET Reflector, ILSpy, dotPeek, Just Decompile Git Extensions inkl. MSysGit, MinGW Github for Windows SmartGit PoSH-Git Console Enhancement Project LINQPad Mercurial RapidSVN SQL Management Studio Adventure Works Sample DB AdventureWorksLT Toad for SQL Server yEd Graph Editor TeX, LateX MiKTeX, TeXworks Pandoc Jenkins, TeamCity KompoZer XML Notepad Kaxaml KDiff3, WinMerge, Perforce Merge Handle DbgView FusLogVw FTP Commander HTML Help Workshop, Sandcastle, SHFB WiX Enterprise Architect InsightProfiler Putty Cygwin DXCore, DXCore Plugins FreeMind ProcessExplorer, ProcessMonitor

    Social Networking, Community
    Windows Live Writer Disgsby Skype TweetDeck FeedReader

    System and others
    Microsoft Office (notably OneNote!!!) Adobe Reader PDF Creator SRWare Iron (Chrome) AddThis bit-ly del.icio.us InstaPaper Leo Dictionary Google Bookmarks Proxy Switchy! StumbleUpon K-Meleon FreeCommander, FAR 7-Zip Keyboard Jedi Launchy TrueCrypt Dropbox Ditto Greenshot Rainlendar2 Everything Daemon Tools inSSIDer VirtualBox Stardock Fences Media Player Classic VLC Media Player Winamp WinAmp Cue Player LAME Encoder CamStudio Youtube to MP3 Converter VirtualDub Image Resizer Powertoy Clone 2.0 Paint.NET Picasa Windy JediConcentrate, Ghoster TeamViewer Timerle TreeSizeFree WinDirStat Windows Sizer, WinResizer ZoomIt

    Sometimes nice to have
    ArcGIS TortoiseSVN, TortoiseCVS XnView GitJungle CowSpy Grindstone Free Download Manager CDBurnerXP Free Audio CD Burner SmartAssembly intellibook GMX SMS Manager BlackBerry Desktop Cisco Any Connect eRoom Foxit Reader Google Earth ThinkVantage GPS Gridy Bluefish The GodFather Tor Browser, Charon YouTube Downloader NCover Network Stumbler Remote Debugger WScite XML Pad DBVisualizer Microsoft Network Monitor, Fiddler2 Eclipse IDE Oracle Client, Oracle SQL Developer

    Bookmarks, Links
    http://pastebin.de/, http://pastebin.com/ http://followup.cc http://trello.com http://tumblr.com https://bitly.com/, http://is.gd http://www.famkruithof.net/uuid/uuidgen, http://www.guidgenerator.com/ https://github.com/, https://bitbucket.org/ http://dict.leo.org/, http://translate.google.com/ http://prezi.com/ http://geekswithblogs.net/Default.aspx, http://codebetter.com/ http://duckduckgo.com/bang.html http://de.schreibtrainer.com/index.php?site=3&menuId=3 http://www.mr-wetter.de/

    This is an update to http://geekswithblogs.net/mapfel/archive/2010/07/12/140877.aspx

    Read the article

  • Examples of temporal database designs? [closed]

    - by miku
    I'm researching various database designs for historical record keeping, because I would like to implement a prototypical web application that (excessively) tracks changes made by users and lets them undo things, see revisions, etc.
    1. I'd love to use Mercurial or Git as the backend (with files as records), since they already implement the kind of append-only changes I imagine. I tried Git and Dulwich (a Python Git API) and it went OK, but I was concerned about performance.
    2. Bi-temporal database design lets you store a row along with the time periods during which that record was valid. This sure sounds more performant than operating on files on disk (as in 1.), but I had little luck finding real-world examples (e.g. from open source projects) that use this kind of design and are approachable enough to learn from.
    3. Revisions à la MediaWiki, or an extra table for versions as in Redmine. The problem here is that DELETE would take the whole history with it.
    4. I looked at NoSQL solutions, too. With a document-oriented approach it would be simple to just store the whole history of an entity within the document itself, which would reduce design plus implementation time compared to an RDBMS approach. However, in this case I'm a bit concerned about ACID properties, which would be important in the application.
    I'd like to ask about experiences with real-world, pragmatic designs for temporal data.
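    To make the bi-temporal option (2) concrete, here is a sketch of one such table and a point-in-time query (table and column names are illustrative):

        -- valid_* = when the fact was true in the real world
        -- tx_*    = when the database believed it (append-only: an update
        --           closes tx_to and inserts a new row, so nothing is lost)
        CREATE TABLE price (
            product_id INTEGER       NOT NULL,
            amount     DECIMAL(10,2) NOT NULL,
            valid_from DATE          NOT NULL,
            valid_to   DATE          NOT NULL,   -- '9999-12-31' = still valid
            tx_from    TIMESTAMP     NOT NULL,
            tx_to      TIMESTAMP     NOT NULL    -- '9999-12-31' = current belief
        );

        -- "What did we believe, as of 2012-01-01, the price was on 2011-06-15?"
        SELECT amount
        FROM price
        WHERE product_id = 42
          AND valid_from <= DATE '2011-06-15' AND DATE '2011-06-15' < valid_to
          AND tx_from <= TIMESTAMP '2012-01-01 00:00:00'
          AND TIMESTAMP '2012-01-01 00:00:00' < tx_to;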

    Read the article

  • Meet the new Windows Azure

    - by Leniel Macaferi
    Hoje estamos lançando um grande conjunto de melhorias para a Windows Azure. A seguir está um breve resumo de apenas algumas destas melhorias: Novo Portal de Administração e Ferramentas de Linha de Comando O lançamento de hoje vem com um novo portal para a Windows Azure, o qual lhe permitirá gerenciar todos os recursos e serviços oferecidos na Windows Azure de uma forma perfeitamente integrada. O portal é muito rápido e fluido, suporta filtragem e classificação dos dados (o que o torna muito fácil de usar em implantações/instalações de grande porte), funciona em todos os navegadores, e oferece um monte de ótimos e novos recursos - incluindo suporte nativo à VM (máquina virtual), Web site, Storage (armazenamento), e monitoramento de Serviços hospedados na Nuvem. O novo portal é construído em cima de uma API de gerenciamento baseada no modelo REST dentro da Windows Azure - e tudo o que você pode fazer através do portal também pode ser feito através de programação acessando esta Web API. Também estamos lançando hoje ferramentas de linha de comando (que, igualmente ao portal, chamam as APIs de Gerenciamento REST) para tornar ainda ainda mais fácil a criação de scripts e a automatização de suas tarefas de administração. Estamos oferecendo para download um conjunto de ferramentas para o Powershell (Windows) e Bash (Mac e Linux). Como nossos SDKs, o código destas ferramentas está hospedado no GitHub sob uma licença Apache 2. Máquinas Virtuais ( Virtual Machines [ VM ] ) A Windows Azure agora suporta a capacidade de implantar e executar VMs duráveis/permanentes ??na nuvem. Você pode criar facilmente essas VMs usando uma nova Galeria de Imagens embutida no novo Portal da Windows Azure ou, alternativamente, você pode fazer o upload e executar suas próprias imagens VHD customizadas. Máquinas virtuais são duráveis ??(o que significa que qualquer coisa que você instalar dentro delas persistirá entre as reinicializações) e você pode usar qualquer sistema operacional nelas. Nossa galeria de imagens nativa inclui imagens do Windows Server (incluindo o novo Windows Server 2012 RC), bem como imagens do Linux (incluindo Ubuntu, CentOS, e as distribuições SUSE). Depois de criar uma instância de uma VM você pode facilmente usar o Terminal Server ou SSH para acessá-las a fim de configurar e personalizar a máquina virtual da maneira como você quiser (e, opcionalmente, capturar uma snapshot (cópia instantânea da imagem atual) para usar ao criar novas instâncias de VMs). Isto te proporciona a flexibilidade de executar praticamente qualquer carga de trabalho dentro da plataforma Windows Azure.   A novo Portal da Windows Azure fornece um rico conjunto de recursos para o gerenciamento de Máquinas Virtuais - incluindo a capacidade de monitorar e controlar a utilização dos recursos dentro delas.  Nosso novo suporte à Máquinas Virtuais também permite a capacidade de facilmente conectar múltiplos discos nas VMs (os quais você pode então montar e formatar como unidades de disco). Opcionalmente, você pode ativar o suporte à replicação geográfica (geo-replication) para estes discos - o que fará com que a Windows Azure continuamente replique o seu armazenamento em um data center secundário (criando um backup), localizado a pelo menos 640 quilômetros de distância do seu data-center principal. 
    We use the same VHD format that is supported with Windows virtualization today (which we have released as an open specification), allowing you to easily migrate existing workloads you have already virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which gives you the flexibility to migrate cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it locally - no import/export steps are required.

    Web Sites

    Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that lets you start small (and for free) and then scale your application as your traffic grows. You can create a new web site on Azure and have it ready to deploy in under 10 seconds. The new Windows Azure Portal provides built-in administration support for Web Sites, including the ability to monitor and track resource utilization in real time.

    You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We are also releasing tooling updates for Visual Studio and Web Matrix that let developers easily publish ASP.NET applications to this new offering. The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of a site deployment - as well as the ability to perform incremental database schema updates with a later deployment.

    You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard. Doing so enables integration with our new online TFS service (which supports a full TFS workflow - including elastic build and test support), or you can create a Git repository and reference it as a remote to perform automatic deployments. Once you deploy using TFS or Git, the deployments tab will track the deployments you make and let you select an older (or newer) deployment, so that you can quickly roll your site back to an earlier state of your code. This provides a very powerful workflow experience.

    Windows Azure now lets you deploy up to 10 web sites into a free, shared, multi-tenant hosting environment with databases (where a site you deploy will be one of several sites running on a shared set of server resources). This gives you an easy way to get started on projects at no cost.
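    As a hedged sketch of the Git publishing flow described above, the steps can be scripted with Python's standard library. The remote URL below is a hypothetical placeholder standing in for whatever the "Set up Git publishing" dashboard shows for your site, not a documented Azure value.

        # Sketch: automating "push to deploy" for a web site with Git.
        # AZURE_REMOTE is a hypothetical placeholder URL.
        import subprocess

        AZURE_REMOTE = "https://user@mysite.example.net/mysite.git"

        def git(*args):
            # Run a git command in the current directory, failing loudly.
            subprocess.run(["git", *args], check=True)

        git("init")
        git("add", "-A")
        git("commit", "-m", "Initial deployment")
        git("remote", "add", "azure", AZURE_REMOTE)
        git("push", "azure", "master")  # the pushed commit gets deployed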
    You can optionally upgrade your sites to run in a "reserved mode" that isolates them, so that you are the only customer within a virtual machine. And you can elastically scale the amount of resources your sites use - for example, increasing your reserved instance capacity as your traffic grows. Windows Azure automatically handles load balancing traffic across VM instances, and you get the same super-fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity on a per-hour basis - which lets you scale your resources up and down to match exactly what you need.

    Cloud Services and Distributed Caching

    Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures, automated application management, and scaling to extremely large deployments. We previously referred to this capability as "hosted services" - with this week's release we are renaming it "cloud services". We are also enabling a bunch of new features with them.

    Distributed Cache

    One of the really cool new features enabled with cloud services is a new distributed caching capability that lets you set up and use a low-latency, in-memory distributed cache within your applications. This cache is isolated for use only by your applications, and it has no throttling limits. It can grow and shrink dynamically and elastically (without you having to redeploy your application or make code changes), and it supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache and more). In addition to the AppFabric Cache Server API, this new caching capability now also supports the Memcached protocol - which lets you point code written against Memcached at the distributed cache (with no code changes required).

    The new distributed cache can be configured to run in one of two ways:

    1) Using a co-located cache approach. With this option you allocate a percentage of memory from your existing web and worker roles to be used by the cache, and the cache joins that memory into one large distributed cache. Any data placed into the cache by one role instance can be accessed by the other role instances in your application - regardless of whether the cached data is stored on that role or another one. The big benefit of the "co-located" cache option is that it is free (you pay nothing to enable it) and it lets you use what would otherwise be unused memory within your application's VMs.

    2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These are also joined into one large distributed cache ring that other roles in your application can access. You can use these roles to cache tens or hundreds of GBs of data in memory extremely effectively - and the cache can be elastically grown or shrunk at runtime within your application.
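    Because the cache speaks the Memcached protocol, existing Memcached client code should work against it unchanged. A minimal, hedged sketch using the python-memcached library (the endpoint address is a hypothetical placeholder, not a real Azure value):

        # Sketch: talking to a Memcached-protocol cache endpoint from Python.
        # Requires: pip install python-memcached
        import memcache

        mc = memcache.Client(["cache.example.net:11211"])  # hypothetical endpoint

        mc.set("greeting", "hello from the distributed cache", time=300)  # 5-minute TTL
        print(mc.get("greeting"))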
    New SDKs and Tooling Support

    We have updated all of the Windows Azure SDKs (software development kits) in today's release to include new features and capabilities. Our SDKs are now available for several languages, and all of their source code is published under an Apache 2 license and maintained in repositories on GitHub. The Azure SDK for .NET in particular has a lot of great improvements in today's release, and now includes tooling support for both VS 2010 and VS 2012 RC. We are also now shipping SDK downloads for Windows, Mac and Linux in the languages offered on all of those systems - so that developers can build Windows Azure applications using any operating system for development.

    Much, Much More

    The summary above is only a small list of the improvements shipping in preview or final form today - there is a lot more in today's release. Among these improvements are new Virtual Private Networking capabilities, a new Service Bus runtime and its tooling support, the public preview of the new Azure Media Services, new data centers, a significant upgrade to storage and networking hardware, SQL Reporting Services, new identity features, support for more than 40 new countries and territories, and much, much more.

    You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live presentation I will be delivering at 1pm PDT (5pm Brasília time) later today, June 7th, where I will walk through all of the new features. We will be opening up the new features mentioned above for public use within a few hours after the presentation ends. We are really excited to see the great applications you will build with them.

    Hope this helps,

    - Scott

    Translated from the original post by Leniel Macaferi.

    Read the article

  • Announcing Unbreakable Enterprise Kernel Release 3 for Oracle Linux

    - by Lenz Grimmer
    We are excited to announce the general availability of the Unbreakable Enterprise Kernel Release 3 for Oracle Linux 6. The Unbreakable Enterprise Kernel Release 3 (UEK R3) is Oracle's third major supported release of its heavily tested and optimized Linux kernel for Oracle Linux 6 on the x86_64 architecture. UEK R3 is based on mainline Linux version 3.8.13. Some notable highlights of this release include:

    - Inclusion of DTrace for Linux in the kernel (no longer a separate kernel image). DTrace for Linux now supports probes for user-space statically defined tracing (USDT) in programs that have been modified to include embedded static probe points
    - Production support for Linux Containers (LXC), which were previously released as a technology preview
    - Btrfs file system improvements (subvolume-aware quota groups, cross-subvolume reflinks, btrfs send/receive to transfer file system snapshots or incremental differences, file hole punching, hot-replacing of failed disk devices, device statistics)
    - Improved support for Control Groups (cgroups)
    - The ext4 file system can now store the contents of a small file inside the inode (inline_data)
    - TCP Fast Open (TFO), which can speed up the opening of successive TCP connections between two endpoints (a hedged client/server sketch follows this entry)
    - FUSE file system performance improvements on NUMA systems
    - Support for the Intel Ivy Bridge (IVB) processor family
    - Integration of the OpenFabrics Enterprise Distribution (OFED) 2.0 stack, supporting a wide range of InfiniBand protocols, including updates to Oracle's Reliable Datagram Sockets (RDS)
    - Numerous driver updates in close coordination with our hardware partners

    UEK R3 uses the same versioning model as the mainline Linux kernel. Unlike UEK R2 (which identifies itself as version "2.6.39" even though it is based on mainline Linux 3.0.x), "uname" now returns the actual version number (3.8.13). For further details on the new features, changes and any known issues, please consult the Release Notes.

    The Unbreakable Enterprise Kernel Release 3 and related packages can be installed using the yum package management tool on Oracle Linux 6 Update 4 or newer, both from the Unbreakable Linux Network (ULN) and from our public yum server. Please follow the installation instructions in the Release Notes for a detailed description of the steps involved. The kernel source tree will also be available via the git source code revision control system from https://oss.oracle.com/git/?p=linux-uek3-3.8.git

    If you would like to discuss your experiences with Oracle Linux and UEK R3, we look forward to your feedback on our public Oracle Linux Forum.
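    Since the TFO bullet above describes the benefit rather than the usage, here is a heavily hedged sketch of exercising it from Python on a TFO-capable kernel such as UEK R3's 3.8 base. It assumes a Linux build of Python that exposes the TCP_FASTOPEN and MSG_FASTOPEN constants and a system with the net.ipv4.tcp_fastopen sysctl enabled for both client and server; constant availability and option ordering may vary by kernel and Python version.

        # Hedged sketch: TCP Fast Open client/server on Linux.
        # Assumes net.ipv4.tcp_fastopen=3 (client + server enabled).
        import socket

        # Server side: allow the kernel to accept TFO data, with a
        # fast-open queue depth of 16 pending requests.
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_TCP, socket.TCP_FASTOPEN, 16)
        srv.bind(("127.0.0.1", 9000))
        srv.listen(5)

        # Client side: MSG_FASTOPEN carries the payload in the SYN itself,
        # saving a round trip on repeat connections to the same server.
        cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        cli.sendto(b"hello", socket.MSG_FASTOPEN, ("127.0.0.1", 9000))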

    Read the article

  • Connection timeout when accessing Github

    - by Felipe Micaroni Lalli
    I have exactly the same problem as described here: http://stackoverflow.com/questions/12849986/connection-timeout-when-accessing-github - so I'll just copy & paste:

    I have some weird problems. When I try to log in to my GitHub account, I get a "net::ERR_EMPTY_RESPONSE" error. I tried Chrome, Firefox and Opera. In Firefox, if I clear the cache and offline data, it works for a while: I can log in, but I still can't create a GitHub repository, even if I clear the cache again. My friend, on the same network, with Windows, can do whatever he wants on GitHub's web site, but I can't. I tried many DNS servers, and I tried not setting one at all (my friend doesn't), but it still doesn't work. My OS: Ubuntu x64 12.04. Ideas, please. And thanks.

    Also, I can clone any repo but I can't push. I had to switch to https://codeplane.com/ because of this problem, but I want to understand why it happens.

    EDIT: I could clone one repo, but the other one just hangs at this point:

        felipelalli@felipelalli-Studio-XPS-8100:~/wa$ git clone git@github.com:felipelalli/micaroni.git
        Cloning into 'micaroni'...
        remote: Counting objects: 5238, done.
        remote: Compressing objects: 100% (3257/3257), done.
        Receiving objects: 92% (4839/5238), 43.29 MiB | 902 KiB/s

    Read the article
