Search Results

Search found 23102 results on 925 pages for 'browser width'.


  • Stretch UL to fill the entire DIV

    - by Interfaith
    There is a similar post (Stretch horizontal ul to fit width of div), but mine is a little trickier: I tried the example there and it failed. My code:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
  <title></title>
  <script type='text/javascript' src='https://ajax.googleapis.com/ajax/li…'></script>
  <script>
    function mainmenu() {
        $("#nav ul").css({display: "none"}); // Opera Fix
        $("#nav li").hover(function() {
            $(this).find('ul:first').css({visibility: "visible", display: "none"}).show(400);
        }, function() {
            $(this).find('ul:first').css({visibility: "hidden"});
        });
    }
    $(document).ready(function() {
        mainmenu();
    });
  </script>
  <style>
    body { font-size: 0.85em; font-family: Verdana, Arial, Helvetica, sans-serif; }
    #nav, #nav ul { margin: 0; padding: 0; list-style-type: none; list-style-position: outside; position: relative; line-height: 1.5em; display: table; width: 100%; }
    #nav a { display: block; padding: 10px 15px 10px 15px; border: 1px solid #fff; color: #fff; text-align: center; margin: 0; text-decoration: none; background: #C34328; border-top: 1px solid #EF593B; -moz-box-shadow: 0px 3px 4px #591E12 inset; -webkit-box-shadow: 0px 3px 4px #591E12 inset; box-shadow: 0px 3px 4px #591E12 inset; }
    #nav a:hover { background-color: #fff; color: #333; }
    #nav li { float: left; position: relative; }
    #nav ul { position: absolute; display: none; width: 12em; top: 3.2em; }
    #nav li ul a { width: 12em; height: auto; float: left; }
    #nav ul ul { top: auto; }
    #nav li ul ul { left: 12em; margin: 0px 0 0 10px; }
  </style>
</head>
<body>
  <div style="width: 980px; border: 1px black solid;">
    <ul id="nav">
      <li><a href="#">Find a Doctor</a></li>
      <li><a href="#">Why Interfaith</a></li>
      <li><a href="#">For Patients & Visitors</a>
        <ul>
          <li><a href="#">3.1 jQuery</a></li>
          <li><a href="#">3.2 Mootools</a></li>
          <li><a href="#">3.3 Prototype</a></li>
        </ul>
      </li>
      <li><a href="#">Medical Services</a>
        <ul>
          <li><a href="#">Behavioral Health</a></li>
          <li><a href="#">Clinical Laboratory</a></li>
          <li><a href="#">Dentistry</a></li>
          <li><a href="#">Emergency</a></li>
          <li><a href="#">Gynecology</a></li>
          <li><a href="#">Medicine</a></li>
          <li><a href="#">Pastoral</a></li>
          <li><a href="#">Pediatrics</a></li>
          <li><a href="#">Physical Medicine & Rehab</a></li>
        </ul>
      </li>
      <li><a href="#">Medical Trainings</a>
        <ul>
          <li><a href="#">Medical Training</a></li>
          <li><a href="#">Behavioral Health</a></li>
          <li><a href="#">Predoctoral Externship</a></li>
          <li><a href="#">Podiatric Residency</a></li>
          <li><a href="#">Dental Residency</a></li>
          <li><a href="#">Pulmonary Medicine</a></li>
        </ul>
      </li>
      <li><a href="#">Contact</a></li>
    </ul>
  </div>
</body>
</html>

    Can someone tell me where I have to edit to complete the code? Thanks
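
    A sketch of one common fix, assuming the goal is to have the five top-level items share the 980px container evenly: the ul already has display: table and width: 100%, so make only its direct children table cells and cancel their float; the nested dropdown items keep the original float layout. The child selector and table-cell display assume a reasonably modern browser (IE8 or later):

/* Sketch: distribute the top-level items across the full menu width. */
#nav > li {
    display: table-cell; /* each item becomes a cell of the 100%-wide table */
    float: none;         /* cancel the #nav li { float: left; } rule above */
}
#nav ul li {
    display: block;      /* dropdown items stay stacked as before */
    float: left;
}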


  • Unable to find the cause of an annoying content gap in my HTML/CSS?

    - by user1472747
    I'm quite new to CSS/HTML and can't find the cause of this little bugger. I want it gone, so that the banner and the nav bar touch each other. Any help is greatly appreciated! Here is the code for the site; I took out some of the irrelevant code.

<!DOCTYPE html>
<html>
<!-- *****CSS CODE START***** -->
<style type="text/css">
#container { margin: 0 auto; width: 900px; background: #fff; }
#header { margin-top: 0px; }
#header h1 { margin: 0; }
#navigation { float: left; width: 900px; background: #333; }
#navigation ul { margin: 0; padding: 0; }
#navigation ul li { list-style-type: none; display: inline; }
#navigation li a { display: block; float: left; padding: 5px 10px; color: #fff; text-decoration: none; border-right: 1px solid #fff; }
#navigation li a:hover { background: #383; }
#content-container { float: left; width: 900px; background: #fff url(/wp-content/uploads/layout-two-fixed-background.gif) repeat-y 100% 0; }
#content { clear: left; float: left; width: 619px; height: 720px; padding: 10px 0; margin: 0 0 0 0px; display: inline; overflow: auto; }
#content h2 { margin: 0; color: #003D5D; padding: 10px; }
#contentBody { padding: 10px; font-size: 22px; }
#aside { float: right; width: 280px; padding: 20px 0; margin: 0 0px 0 0; display: inline; background: #cccccc; height: 700px; border-left: 1px solid #333; }
#aside h3 { margin: 0 20px; color: #003D5D; font-family: Times New Roman; }
#asideText { margin: 0 20px; font-family: Times New Roman; }
#footer { clear: both; background: #ccc; text-align: right; padding: 5px; height: 1%; border-top: 1px solid #333; }
</style>
<!-- *****CSS CODE END***** -->
<!-- *****HTML CODE START***** -->
<body>
  <div id="container">
    <div id="header">
      <img src=file:///Users/jduffy/Desktop/projectSite/banner1.jpg> </img>
    </div>
    <div id="navigation">
      <ul>
        <li><a href="file:///Users/jduffy/Desktop/projectSite/home">Home</a></li>
        <li><a href="file:///Users/jduffy/Desktop/projectSite/theProject">The Project</a></li>
        <li><a href="file:///Users/jduffy/Desktop/projectSite/Pictures">Pictures</a></li>
        <li><a href="file:///Users/jduffy/Desktop/projectSite/Contact">Contact us</a></li>
      </ul>
    </div>
    <div id="content-container">
      <div id="content">
        <h2> Page heading </h2>
        <div id="contentBody">
          <p> home pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome page home pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome page home pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome pagehome page </p>
          <p> test2 </p>
          <p> test3 </p>
        </div>
      </div>
      <div id="aside">
        <div id="asideHeading">
          <h3> Aside Heading </h3>
        </div>
        <div id="asideText">
          <p> test5 </p>
        </div>
      </div>
      <div id="footer">
        <text id="footerDate">0</text>
      </div>
    </div>
  </div>
</body>
<!-- *****HTML CODE END***** -->
</html>
<!-- *****JavaScript CODE START***** -->
<script type="text/javascript">
/* date */
var today = new Date();
document.getElementById("footerDate").innerHTML = today;
</script>
<!-- *****JavaScript CODE END***** -->
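
    A likely culprit, offered as a guess since the rendered page is not shown: an img is an inline element, so the banner sits on the text baseline and leaves a few pixels of descender space at the bottom of #header, which shows up as a gap above the nav bar. A minimal sketch of the usual fix:

/* Sketch: remove the inline-image baseline gap under the banner. */
#header img {
    display: block; /* takes the image out of inline text flow, so no baseline gap */
}

    Using vertical-align: bottom on the image is a common alternative if the image must stay inline.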


  • fatal error C1014: too many include files : depth = 1024

    - by numerical25
    I have no idea what this means, but here is the code it supposedly is happening in.

//=======================================================================================
// d3dApp.cpp by Frank Luna (C) 2008 All Rights Reserved.
//=======================================================================================

#include "d3dApp.h"
#include <sstream>   // was <stream>; std::wostringstream lives in <sstream>

LRESULT CALLBACK MainWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    static D3DApp* app = 0;

    switch( msg )
    {
        case WM_CREATE:
        {
            // Get the 'this' pointer we passed to CreateWindow via the lpParam parameter.
            CREATESTRUCT* cs = (CREATESTRUCT*)lParam;
            app = (D3DApp*)cs->lpCreateParams;
            return 0;
        }
    }

    // Don't start processing messages until after WM_CREATE.
    if( app )
        return app->msgProc(msg, wParam, lParam);
    else
        return DefWindowProc(hwnd, msg, wParam, lParam);
}

D3DApp::D3DApp(HINSTANCE hInstance)
{
    mhAppInst   = hInstance;
    mhMainWnd   = 0;
    mAppPaused  = false;
    mMinimized  = false;
    mMaximized  = false;
    mResizing   = false;
    mFrameStats = L"";

    md3dDevice          = 0;
    mSwapChain          = 0;
    mDepthStencilBuffer = 0;
    mRenderTargetView   = 0;
    mDepthStencilView   = 0;
    mFont               = 0;

    mMainWndCaption = L"D3D10 Application";
    md3dDriverType  = D3D10_DRIVER_TYPE_HARDWARE;
    mClearColor     = D3DXCOLOR(0.0f, 0.0f, 1.0f, 1.0f);
    mClientWidth    = 800;
    mClientHeight   = 600;
}

D3DApp::~D3DApp()
{
    ReleaseCOM(mRenderTargetView);
    ReleaseCOM(mDepthStencilView);
    ReleaseCOM(mSwapChain);
    ReleaseCOM(mDepthStencilBuffer);
    ReleaseCOM(md3dDevice);
    ReleaseCOM(mFont);
}

HINSTANCE D3DApp::getAppInst() { return mhAppInst; }

HWND D3DApp::getMainWnd() { return mhMainWnd; }

int D3DApp::run()
{
    MSG msg = {0};
    mTimer.reset();

    while(msg.message != WM_QUIT)
    {
        // If there are Window messages then process them.
        if(PeekMessage( &msg, 0, 0, 0, PM_REMOVE ))
        {
            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }
        // Otherwise, do animation/game stuff.
        else
        {
            mTimer.tick();
            if( !mAppPaused )
                updateScene(mTimer.getDeltaTime());
            else
                Sleep(50);
            drawScene();
        }
    }
    return (int)msg.wParam;
}

void D3DApp::initApp()
{
    initMainWindow();
    initDirect3D();

    D3DX10_FONT_DESC fontDesc;
    fontDesc.Height          = 24;
    fontDesc.Width           = 0;
    fontDesc.Weight          = 0;
    fontDesc.MipLevels       = 1;
    fontDesc.Italic          = false;
    fontDesc.CharSet         = DEFAULT_CHARSET;
    fontDesc.OutputPrecision = OUT_DEFAULT_PRECIS;
    fontDesc.Quality         = DEFAULT_QUALITY;
    fontDesc.PitchAndFamily  = DEFAULT_PITCH | FF_DONTCARE;
    wcscpy(fontDesc.FaceName, L"Times New Roman");
    D3DX10CreateFontIndirect(md3dDevice, &fontDesc, &mFont);
}

void D3DApp::onResize()
{
    // Release the old views, as they hold references to the buffers we
    // will be destroying. Also release the old depth/stencil buffer.
    ReleaseCOM(mRenderTargetView);
    ReleaseCOM(mDepthStencilView);
    ReleaseCOM(mDepthStencilBuffer);

    // Resize the swap chain and recreate the render target view.
    HR(mSwapChain->ResizeBuffers(1, mClientWidth, mClientHeight, DXGI_FORMAT_R8G8B8A8_UNORM, 0));
    ID3D10Texture2D* backBuffer;
    HR(mSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), reinterpret_cast<void**>(&backBuffer)));
    HR(md3dDevice->CreateRenderTargetView(backBuffer, 0, &mRenderTargetView));
    ReleaseCOM(backBuffer);

    // Create the depth/stencil buffer and view.
    D3D10_TEXTURE2D_DESC depthStencilDesc;
    depthStencilDesc.Width              = mClientWidth;
    depthStencilDesc.Height             = mClientHeight;
    depthStencilDesc.MipLevels          = 1;
    depthStencilDesc.ArraySize          = 1;
    depthStencilDesc.Format             = DXGI_FORMAT_D24_UNORM_S8_UINT;
    depthStencilDesc.SampleDesc.Count   = 1; // multisampling must match
    depthStencilDesc.SampleDesc.Quality = 0; // swap chain values.
    depthStencilDesc.Usage              = D3D10_USAGE_DEFAULT;
    depthStencilDesc.BindFlags          = D3D10_BIND_DEPTH_STENCIL;
    depthStencilDesc.CPUAccessFlags     = 0;
    depthStencilDesc.MiscFlags          = 0;
    HR(md3dDevice->CreateTexture2D(&depthStencilDesc, 0, &mDepthStencilBuffer));
    HR(md3dDevice->CreateDepthStencilView(mDepthStencilBuffer, 0, &mDepthStencilView));

    // Bind the render target view and depth/stencil view to the pipeline.
    md3dDevice->OMSetRenderTargets(1, &mRenderTargetView, mDepthStencilView);

    // Set the viewport transform.
    D3D10_VIEWPORT vp;
    vp.TopLeftX = 0;
    vp.TopLeftY = 0;
    vp.Width    = mClientWidth;
    vp.Height   = mClientHeight;
    vp.MinDepth = 0.0f;
    vp.MaxDepth = 1.0f;
    md3dDevice->RSSetViewports(1, &vp);
}

void D3DApp::updateScene(float dt)
{
    // Code computes the average frames per second, and also the
    // average time it takes to render one frame.
    static int frameCnt = 0;
    static float t_base = 0.0f;
    frameCnt++;

    // Compute averages over one second period.
    if( (mTimer.getGameTime() - t_base) >= 1.0f )
    {
        float fps  = (float)frameCnt; // fps = frameCnt / 1
        float mspf = 1000.0f / fps;

        std::wostringstream outs;
        outs.precision(6);
        outs << L"FPS: " << fps << L"\n" << "Milliseconds: Per Frame: " << mspf;
        mFrameStats = outs.str();

        // Reset for next average.
        frameCnt = 0;
        t_base  += 1.0f;
    }
}

void D3DApp::drawScene()
{
    md3dDevice->ClearRenderTargetView(mRenderTargetView, mClearColor);
    md3dDevice->ClearDepthStencilView(mDepthStencilView, D3D10_CLEAR_DEPTH|D3D10_CLEAR_STENCIL, 1.0f, 0);
}

LRESULT D3DApp::msgProc(UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch( msg )
    {
    // WM_ACTIVATE is sent when the window is activated or deactivated.
    // We pause the game when the window is deactivated and unpause it
    // when it becomes active.
    case WM_ACTIVATE:
        if( LOWORD(wParam) == WA_INACTIVE )
        {
            mAppPaused = true;
            mTimer.stop();
        }
        else
        {
            mAppPaused = false;
            mTimer.start();
        }
        return 0;

    // WM_SIZE is sent when the user resizes the window.
    case WM_SIZE:
        // Save the new client area dimensions.
        mClientWidth  = LOWORD(lParam);
        mClientHeight = HIWORD(lParam);
        if( md3dDevice )
        {
            if( wParam == SIZE_MINIMIZED )
            {
                mAppPaused = true;
                mMinimized = true;
                mMaximized = false;
            }
            else if( wParam == SIZE_MAXIMIZED )
            {
                mAppPaused = false;
                mMinimized = false;
                mMaximized = true;
                onResize();
            }
            else if( wParam == SIZE_RESTORED )
            {
                // Restoring from minimized state?
                if( mMinimized )
                {
                    mAppPaused = false;
                    mMinimized = false;
                    onResize();
                }
                // Restoring from maximized state?
                else if( mMaximized )
                {
                    mAppPaused = false;
                    mMaximized = false;
                    onResize();
                }
                else if( mResizing )
                {
                    // If user is dragging the resize bars, we do not resize
                    // the buffers here because as the user continuously
                    // drags the resize bars, a stream of WM_SIZE messages are
                    // sent to the window, and it would be pointless (and slow)
                    // to resize for each WM_SIZE message received from dragging
                    // the resize bars. So instead, we reset after the user is
                    // done resizing the window and releases the resize bars, which
                    // sends a WM_EXITSIZEMOVE message.
                }
                else // API call such as SetWindowPos or mSwapChain->SetFullscreenState.
                {
                    onResize();
                }
            }
        }
        return 0;

    // WM_ENTERSIZEMOVE is sent when the user grabs the resize bars.
    case WM_ENTERSIZEMOVE:
        mAppPaused = true;
        mResizing  = true;
        mTimer.stop();
        return 0;

    // WM_EXITSIZEMOVE is sent when the user releases the resize bars.
    // Here we reset everything based on the new window dimensions.
    case WM_EXITSIZEMOVE:
        mAppPaused = false;
        mResizing  = false;
        mTimer.start();
        onResize();
        return 0;

    // WM_DESTROY is sent when the window is being destroyed.
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;

    // The WM_MENUCHAR message is sent when a menu is active and the user presses
    // a key that does not correspond to any mnemonic or accelerator key.
    case WM_MENUCHAR:
        // Don't beep when we alt-enter.
        return MAKELRESULT(0, MNC_CLOSE);

    // Catch this message so to prevent the window from becoming too small.
    case WM_GETMINMAXINFO:
        ((MINMAXINFO*)lParam)->ptMinTrackSize.x = 200;
        ((MINMAXINFO*)lParam)->ptMinTrackSize.y = 200;
        return 0;
    }

    return DefWindowProc(mhMainWnd, msg, wParam, lParam);
}

void D3DApp::initMainWindow()
{
    WNDCLASS wc;
    wc.style         = CS_HREDRAW | CS_VREDRAW;
    wc.lpfnWndProc   = MainWndProc;
    wc.cbClsExtra    = 0;
    wc.cbWndExtra    = 0;
    wc.hInstance     = mhAppInst;
    wc.hIcon         = LoadIcon(0, IDI_APPLICATION);
    wc.hCursor       = LoadCursor(0, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)GetStockObject(NULL_BRUSH);
    wc.lpszMenuName  = 0;
    wc.lpszClassName = L"D3DWndClassName";

    if( !RegisterClass(&wc) )
    {
        MessageBox(0, L"RegisterClass FAILED", 0, 0);
        PostQuitMessage(0);
    }

    // Compute window rectangle dimensions based on requested client area dimensions.
    RECT R = { 0, 0, mClientWidth, mClientHeight };
    AdjustWindowRect(&R, WS_OVERLAPPEDWINDOW, false);
    int width  = R.right - R.left;
    int height = R.bottom - R.top;

    mhMainWnd = CreateWindow(L"D3DWndClassName", mMainWndCaption.c_str(),
        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, width, height,
        0, 0, mhAppInst, this);

    if( !mhMainWnd )
    {
        MessageBox(0, L"CreateWindow FAILED", 0, 0);
        PostQuitMessage(0);
    }

    ShowWindow(mhMainWnd, SW_SHOW);
    UpdateWindow(mhMainWnd);
}

void D3DApp::initDirect3D()
{
    // Fill out a DXGI_SWAP_CHAIN_DESC to describe our swap chain.
    DXGI_SWAP_CHAIN_DESC sd;
    sd.BufferDesc.Width  = mClientWidth;
    sd.BufferDesc.Height = mClientHeight;
    sd.BufferDesc.RefreshRate.Numerator   = 60;
    sd.BufferDesc.RefreshRate.Denominator = 1;
    sd.BufferDesc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    sd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
    sd.BufferDesc.Scaling          = DXGI_MODE_SCALING_UNSPECIFIED;

    // No multisampling.
    sd.SampleDesc.Count   = 1;
    sd.SampleDesc.Quality = 0;

    sd.BufferUsage  = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    sd.BufferCount  = 1;
    sd.OutputWindow = mhMainWnd;
    sd.Windowed     = true;
    sd.SwapEffect   = DXGI_SWAP_EFFECT_DISCARD;
    sd.Flags        = 0;

    // Create the device.
    UINT createDeviceFlags = 0;
#if defined(DEBUG) || defined(_DEBUG)
    createDeviceFlags |= D3D10_CREATE_DEVICE_DEBUG;
#endif

    HR( D3D10CreateDeviceAndSwapChain(
            0,                 // default adapter
            md3dDriverType,
            0,                 // no software device
            createDeviceFlags,
            D3D10_SDK_VERSION,
            &sd,
            &mSwapChain,
            &md3dDevice) );

    // The remaining steps that need to be carried out for d3d creation
    // also need to be executed every time the window is resized. So
    // just call the onResize method here to avoid code duplication.
    onResize();
}
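
    For what the error itself means: fatal error C1014 is the preprocessor reporting that it nested #include files 1024 deep, which in practice almost always indicates a circular include with no include guard, so some header in the chain ends up re-including itself until the limit is hit. A sketch of the usual fix, with hypothetical file names, is to guard every header (or use #pragma once):

// d3dApp.h -- hypothetical sketch of a guarded header; the macro name is arbitrary.
// Without a guard, d3dApp.h including d3dUtil.h including d3dApp.h (for example)
// recurses until the compiler gives up at depth 1024 with C1014.
#ifndef D3DAPP_H
#define D3DAPP_H

#include "d3dUtil.h"   // every header this pulls in must be guarded the same way

class D3DApp
{
    // ... declarations as in the book's framework ...
};

#endif // D3DAPP_H

    Checking that d3dApp.h, and every header it includes directly or indirectly, starts with such a guard is the first thing to try.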


  • ASP.NET Podcast Show #148 - ASP.NET WebForms to build a Mobile Web Application

    - by Wallym
    Check the podcast site for the original URL. This is the video and source code for an ASP.NET WebForms app that I wrote, optimized for the iPhone and mobile environments. Subscribe to everything. Subscribe to WMV. Subscribe to M4V for iPhone/iPad. Subscribe to MP3. Download WMV. Download M4V for iPhone/iPad. Download MP3. Link to iWebKit.

    Source Code:

<%@ Page Title="MapSplore" Language="C#" MasterPageFile="iPhoneMaster.master" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="AT_iPhone_Default" %>
<asp:Content ID="Content1" ContentPlaceHolderID="head" Runat="Server"></asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="Content" Runat="Server" ClientIDMode="Static">
  <asp:ScriptManager ID="sm" runat="server" EnablePartialRendering="true" EnableHistory="false" EnableCdn="true" />
  <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=true"></script>
  <script language="javascript" type="text/javascript">
  <!--
  Sys.WebForms.PageRequestManager.getInstance().add_endRequest(endRequestHandle);
  function endRequestHandle(sender, Args) {
      setupMapDiv();
      setupPlaceIveBeen();
  }
  function setupPlaceIveBeen() {
      var mapPlaceIveBeen = document.getElementById('divPlaceIveBeen');
      if (mapPlaceIveBeen != null) {
          var PlaceLat = document.getElementById('<%=hdPlaceIveBeenLatitude.ClientID %>').value;
          var PlaceLon = document.getElementById('<%=hdPlaceIveBeenLongitude.ClientID %>').value;
          var PlaceTitle = document.getElementById('<%=lblPlaceIveBeenName.ClientID %>').innerHTML;
          var latlng = new google.maps.LatLng(PlaceLat, PlaceLon);
          var myOptions = { zoom: 14, center: latlng, mapTypeId: google.maps.MapTypeId.ROADMAP };
          var map = new google.maps.Map(mapPlaceIveBeen, myOptions);
          var marker = new google.maps.Marker({ position: new google.maps.LatLng(PlaceLat, PlaceLon), map: map, title: PlaceTitle, clickable: false });
      }
  }
  function setupMapDiv() {
      var mapdiv = document.getElementById('divImHere');
      if (mapdiv != null) {
          var PlaceLat = document.getElementById('<%=hdPlaceLat.ClientID %>').value;
          var PlaceLon = document.getElementById('<%=hdPlaceLon.ClientID %>').value;
          var PlaceTitle = document.getElementById('<%=hdPlaceTitle.ClientID %>').value;
          var latlng = new google.maps.LatLng(PlaceLat, PlaceLon);
          var myOptions = { zoom: 14, center: latlng, mapTypeId: google.maps.MapTypeId.ROADMAP };
          var map = new google.maps.Map(mapdiv, myOptions);
          var marker = new google.maps.Marker({ position: new google.maps.LatLng(PlaceLat, PlaceLon), map: map, title: PlaceTitle, clickable: false });
      }
  }
  -->
  </script>
  <asp:HiddenField ID="Latitude" runat="server" />
  <asp:HiddenField ID="Longitude" runat="server" />
  <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
  <script language="javascript" type="text/javascript">
      $(document).ready(function () {
          GetLocation();
          setupMapDiv();
          setupPlaceIveBeen();
      });
      function GetLocation() {
          if (navigator.geolocation != null) {
              navigator.geolocation.getCurrentPosition(getData);
          }
          else {
              var mess = document.getElementById('<%=Message.ClientID %>');
              mess.innerHTML = "Sorry, your browser does not support geolocation. " +
                  "Try the latest version of Safari on the iPhone, Android browser, or the latest version of FireFox.";
          }
      }
      function UpdateLocation_Click() {
          GetLocation();
      }
      function getData(position) {
          var latitude = position.coords.latitude;
          var longitude = position.coords.longitude;
          var hdLat = document.getElementById('<%=Latitude.ClientID %>');
          var hdLon = document.getElementById('<%=Longitude.ClientID %>');
          hdLat.value = latitude;
          hdLon.value = longitude;
      }
  </script>
  <asp:Label ID="Message" runat="server" />
  <asp:UpdatePanel ID="upl" runat="server">
    <ContentTemplate>
      <asp:Panel ID="pnlStart" runat="server" Visible="true">
        <div id="topbar"><div id="title">MapSplore</div></div>
        <div id="content">
          <ul class="pageitem">
            <li class="menu"><asp:LinkButton ID="lbLocalDeals" runat="server" onclick="lbLocalDeals_Click"><asp:Image ID="imLocalDeals" runat="server" ImageUrl="~/Images/ArtFavor_Money_Bag_Icon.png" Height="30" /><span class="name">Local Deals.</span><span class="arrow"></span></asp:LinkButton></li>
            <li class="menu"><asp:LinkButton ID="lbLocalPlaces" runat="server" onclick="lbLocalPlaces_Click"><asp:Image ID="imLocalPlaces" runat="server" ImageUrl="~/Images/Andy_Houses_on_the_horizon_-_Starburst_remix.png" Height="30" /><span class="name">Local Places.</span><span class="arrow"></span></asp:LinkButton></li>
            <li class="menu"><asp:LinkButton ID="lbWhereIveBeen" runat="server" onclick="lbWhereIveBeen_Click"><asp:Image ID="imImHere" runat="server" ImageUrl="~/Images/ryanlerch_flagpole.png" Height="30" /><span class="name">I've been here.</span><span class="arrow"></span></asp:LinkButton></li>
            <li class="menu"><asp:LinkButton ID="lbMyStats" runat="server"><asp:Image ID="imMyStats" runat="server" ImageUrl="~/Images/Anonymous_Spreadsheet.png" Height="30" /><span class="name">My Stats.</span><span class="arrow"></span></asp:LinkButton></li>
            <li class="menu"><asp:LinkButton ID="lbAddAPlace" runat="server" onclick="lbAddAPlace_Click"><asp:Image ID="imAddAPlace" runat="server" ImageUrl="~/Images/jean_victor_balin_add.png" Height="30" /><span class="name">Add a Place.</span><span class="arrow"></span></asp:LinkButton></li>
            <li class="button"><input type="button" value="Update Your Current Location" onclick="UpdateLocation_Click()"></li>
          </ul>
        </div>
      </asp:Panel>
      <div>
      <asp:Panel ID="pnlCoupons" runat="server" Visible="false">
        <div id="topbar"><div id="title">MapSplore</div><div id="leftbutton"><asp:LinkButton runat="server" Text="Return" ID="ReturnFromDeals" OnClick="ReturnFromDeals_Click" /></div></div>
        <div class="content">
          <asp:ListView ID="lvCoupons" runat="server">
            <LayoutTemplate>
              <ul class="pageitem" runat="server"><asp:PlaceHolder ID="itemPlaceholder" runat="server" /></ul>
            </LayoutTemplate>
            <ItemTemplate>
              <li class="menu"><asp:LinkButton ID="lbBusiness" runat="server" Text='<%#Eval("Place.Name") %>' OnClick="lbBusiness_Click"><span class="comment"><asp:Label ID="lblAddress" runat="server" Text='<%#Eval("Place.Address1") %>' /><asp:Label ID="lblDis" runat="server" Text='<%# Convert.ToString(Convert.ToInt32(Eval("Place.Distance"))) + " meters" %>' CssClass="smallText" /><asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' /><asp:HiddenField ID="hdGeoPromotionId" runat="server" Value='<%#Eval("GeoPromotionId") %>' /></span><span class="arrow"></span></asp:LinkButton></li>
            </ItemTemplate>
          </asp:ListView>
          <asp:GridView ID="gvCoupons" runat="server" AutoGenerateColumns="false">
            <HeaderStyle BackColor="Silver" />
            <AlternatingRowStyle BackColor="Wheat" />
            <Columns>
              <asp:TemplateField AccessibleHeaderText="Business" HeaderText="Business">
                <ItemTemplate>
                  <asp:Image ID="imPlaceType" runat="server" Text='<%#Eval("Type") %>' ImageUrl='<%#Eval("Image") %>' />
                  <asp:LinkButton ID="lbBusiness" runat="server" Text='<%#Eval("Name") %>' OnClick="lbBusiness_Click" />
                  <asp:LinkButton ID="lblAddress" runat="server" Text='<%#Eval("Address1") %>' CssClass="smallText" />
                  <asp:Label ID="lblDis" runat="server" Text='<%# Convert.ToString(Convert.ToInt32(Eval("Distance"))) + " meters" %>' CssClass="smallText" />
                  <asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' />
                  <asp:HiddenField ID="hdGeoPromotionId" runat="server" Value='<%#Eval("GeoPromotionId") %>' />
                  <asp:Label ID="lblInfo" runat="server" Visible="false" />
                </ItemTemplate>
              </asp:TemplateField>
            </Columns>
          </asp:GridView>
        </div>
      </asp:Panel>
      <asp:Panel ID="pnlPlaces" runat="server" Visible="false">
        <div id="topbar"><div id="title">MapSplore</div><div id="leftbutton"><asp:LinkButton runat="server" Text="Return" ID="ReturnFromPlaces" OnClick="ReturnFromPlaces_Click" /></div></div>
        <div id="content">
          <asp:ListView ID="lvPlaces" runat="server">
            <LayoutTemplate>
              <ul id="ulPlaces" class="pageitem" runat="server">
                <asp:PlaceHolder ID="itemPlaceholder" runat="server" />
                <li class="menu"><asp:LinkButton ID="lbNotListed" runat="server" CssClass="name" OnClick="lbNotListed_Click">Place not listed<span class="arrow"></span></asp:LinkButton></li>
              </ul>
            </LayoutTemplate>
            <ItemTemplate>
              <li class="menu"><asp:LinkButton ID="lbImHere" runat="server" CssClass="name" OnClick="lbImHere_Click"><%#DisplayName(Eval("Name")) %>&nbsp;<%# Convert.ToString(Convert.ToInt32(Eval("Distance"))) + " meters" %><asp:HiddenField ID="hdPlaceId" runat="server" Value='<%#Eval("PlaceId") %>' /><span class="arrow"></span></asp:LinkButton></li>
            </ItemTemplate>
          </asp:ListView>
        </div>
      </asp:Panel>
      <asp:Panel ID="pnlImHereNow" runat="server" Visible="false">
        <div id="topbar"><div id="title">MapSplore</div><div id="leftbutton"><asp:LinkButton runat="server" Text="Places" ID="lbImHereNowReturn" OnClick="lbImHereNowReturn_Click" /></div></div>
        <div id="rightbutton"><asp:LinkButton runat="server" Text="Beginning" ID="lbBackToBeginning" OnClick="lbBackToBeginning_Click" /></div>
        <div id="content">
          <ul class="pageitem">
            <asp:HiddenField ID="hdPlaceId" runat="server" />
            <asp:HiddenField ID="hdPlaceLat" runat="server" />
            <asp:HiddenField ID="hdPlaceLon" runat="server" />
            <asp:HiddenField ID="hdPlaceTitle" runat="server" />
            <asp:Button ID="btnImHereNow" runat="server" Text="I'm here" OnClick="btnImHereNow_Click" />
            <asp:Label ID="lblPlaceTitle" runat="server" /><br />
            <asp:TextBox ID="txtWhatsHappening" runat="server" TextMode="MultiLine" Rows="2" style="width:300px" /><br />
            <div id="divImHere" style="width:300px; height:300px"></div>
        </div>
        </ul>
      </asp:Panel>
      <asp:Panel runat="server" ID="pnlIveBeenHere" Visible="false">
        <div id="topbar"><div id="title">Where I've been</div><div id="leftbutton"><asp:LinkButton ID="lbIveBeenHereBack" runat="server" Text="Back" OnClick="lbIveBeenHereBack_Click" /></div></div>
        <div id="content">
          <asp:ListView ID="lvWhereIveBeen" runat="server">
            <LayoutTemplate>
              <ul id="ulWhereIveBeen" class="pageitem" runat="server"><asp:PlaceHolder ID="itemPlaceholder" runat="server" /></ul>
            </LayoutTemplate>
            <ItemTemplate>
              <li class="menu" runat="server"><asp:LinkButton ID="lbPlaceIveBeen" runat="server" OnClick="lbPlaceIveBeen_Click" CssClass="name"><asp:Label ID="lblPlace" runat="server" Text='<%#Eval("PlaceName") %>' /> at <asp:Label ID="lblTime" runat="server" Text='<%#Eval("ATTime") %>' CssClass="content" /><asp:HiddenField ID="hdATID" runat="server" Value='<%#Eval("ATID") %>' /><span class="arrow"></span></asp:LinkButton></li>
            </ItemTemplate>
          </asp:ListView>
        </div>
      </asp:Panel>
      <asp:Panel runat="server" ID="pnlPlaceIveBeen" Visible="false">
        <div id="topbar">
          <div id="title">I've been here</div>
          <div id="leftbutton"><asp:LinkButton ID="lbPlaceIveBeenBack" runat="server" Text="Back" OnClick="lbPlaceIveBeenBack_Click" /></div>
          <div id="rightbutton"><asp:LinkButton ID="lbPlaceIveBeenBeginning" runat="server" Text="Beginning" OnClick="lbPlaceIveBeenBeginning_Click" /></div>
        </div>
        <div id="content">
          <ul class="pageitem">
            <li>
              <asp:HiddenField ID="hdPlaceIveBeenPlaceId" runat="server" />
              <asp:HiddenField ID="hdPlaceIveBeenLatitude" runat="server" />
              <asp:HiddenField ID="hdPlaceIveBeenLongitude" runat="server" />
              <asp:Label ID="lblPlaceIveBeenName" runat="server" /><br />
              <asp:Label ID="lblPlaceIveBeenAddress" runat="server" /><br />
              <asp:Label ID="lblPlaceIveBeenCity" runat="server" />, <asp:Label ID="lblPlaceIveBeenState" runat="server" /> <asp:Label ID="lblPlaceIveBeenZipCode" runat="server" /><br />
              <asp:Label ID="lblPlaceIveBeenCountry" runat="server" /><br />
              <div id="divPlaceIveBeen" style="width:300px; height:300px"></div>
            </li>
          </ul>
        </div>
      </asp:Panel>
      <asp:Panel ID="pnlAddPlace" runat="server" Visible="false">
        <div id="topbar"><div id="title">MapSplore</div><div id="leftbutton"><asp:LinkButton ID="lbAddPlaceReturn" runat="server" Text="Back" OnClick="lbAddPlaceReturn_Click" /></div><div id="rightnav"></div></div>
        <div id="content">
          <ul class="pageitem">
            <li id="liPlaceAddMessage" runat="server" visible="false"><asp:Label ID="PlaceAddMessage" runat="server" /></li>
            <li class="bigfield"><asp:TextBox ID="txtPlaceName" runat="server" placeholder="Name of Establishment" /></li>
            <li class="bigfield"><asp:TextBox ID="txtAddress1" runat="server" placeholder="Address 1" /></li>
            <li class="bigfield"><asp:TextBox ID="txtCity" runat="server" placeholder="City" /></li>
            <li class="select"><asp:DropDownList ID="ddlProvince" runat="server" placeholder="Select State" /><span class="arrow"></span></li>
            <li class="bigfield"><asp:TextBox ID="txtZipCode" runat="server" placeholder="Zip Code" /></li>
            <li class="select"><asp:DropDownList ID="ddlCountry" runat="server" onselectedindexchanged="ddlCountry_SelectedIndexChanged" /><span class="arrow"></span></li>
            <li class="bigfield"><asp:TextBox ID="txtPhoneNumber" runat="server" placeholder="Phone Number" /></li>
            <li class="checkbox"><span class="name">You Here Now:</span> <asp:CheckBox ID="cbYouHereNow" runat="server" Checked="true" /></li>
            <li class="button"><asp:Button ID="btnAdd" runat="server" Text="Add Place" onclick="btnAdd_Click" /></li>
          </ul>
        </div>
      </asp:Panel>
      <asp:Panel ID="pnlImHere" runat="server" Visible="false">
        <asp:TextBox ID="txtImHere" runat="server" TextMode="MultiLine" Rows="3" Columns="40" /><br />
        <asp:DropDownList ID="ddlPlace" runat="server" /><br />
        <asp:Button ID="btnHere" runat="server" Text="Tell Everyone I'm Here" onclick="btnHere_Click" /><br />
      </asp:Panel>
      </div>
    </ContentTemplate>
  </asp:UpdatePanel>
</asp:Content>

    Code Behind .cs file:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using LocationDataModel;

public partial class AT_iPhone_Default : ViewStatePage
{
    private iPhoneDevice ipd;

    protected void Page_Load(object sender, EventArgs e)
    {
        LocationDataEntities lde = new LocationDataEntities();
        if (!Page.IsPostBack)
        {
            var Countries = from c in lde.Countries select c;
            foreach (Country co in Countries) { ddlCountry.Items.Add(new ListItem(co.Name, co.CountryId.ToString())); }
            ddlCountry_SelectedIndexChanged(ddlCountry, null);
            if (AppleIPhone.IsIPad()) ipd = iPhoneDevice.iPad;
            if (AppleIPhone.IsIPhone()) ipd = iPhoneDevice.iPhone;
            if (AppleIPhone.IsIPodTouch()) ipd = iPhoneDevice.iPodTouch;
        }
    }

    protected void btnPlaces_Click(object sender, EventArgs e) { }

    protected void btnAdd_Click(object sender, EventArgs e)
    {
        bool blImHere = cbYouHereNow.Checked;
        string Place = txtPlaceName.Text, Address1 = txtAddress1.Text, City = txtCity.Text, ZipCode = txtZipCode.Text, PhoneNumber = txtPhoneNumber.Text, ProvinceId = ddlProvince.SelectedItem.Value, CountryId = ddlCountry.SelectedItem.Value;
        int iProvinceId, iCountryId;
        double dLatitude, dLongitude;
        DataAccess da = new DataAccess();
        if ((!String.IsNullOrEmpty(ProvinceId)) && (!String.IsNullOrEmpty(CountryId)))
        {
            iProvinceId = Convert.ToInt32(ProvinceId);
            iCountryId = Convert.ToInt32(CountryId);
            if (blImHere)
            {
                dLatitude = Convert.ToDouble(Latitude.Value);
                dLongitude = Convert.ToDouble(Longitude.Value);
                da.StorePlace(Place, Address1, String.Empty, City, iProvinceId, ZipCode, iCountryId, PhoneNumber, dLatitude, dLongitude);
            }
            else
            {
                da.StorePlace(Place, Address1, String.Empty, City, iProvinceId, ZipCode, iCountryId, PhoneNumber);
            }
            liPlaceAddMessage.Visible = true;
            PlaceAddMessage.Text = "Awesome, your place has been added. Add Another!";
            txtPlaceName.Text = String.Empty;
            txtAddress1.Text = String.Empty;
            txtCity.Text = String.Empty;
            ddlProvince.SelectedIndex = -1;
            txtZipCode.Text = String.Empty;
            txtPhoneNumber.Text = String.Empty;
        }
        else
        {
            liPlaceAddMessage.Visible = true;
            PlaceAddMessage.Text = "Please select a State and a Country.";
        }
    }

    protected void ddlCountry_SelectedIndexChanged(object sender, EventArgs e)
    {
        string CountryId = ddlCountry.SelectedItem.Value;
        if (!String.IsNullOrEmpty(CountryId))
        {
            int iCountryId = Convert.ToInt32(CountryId);
            LocationDataModel.LocationDataEntities lde = new LocationDataModel.LocationDataEntities();
            var prov = from p in lde.Provinces where p.CountryId == iCountryId orderby p.ProvinceName select p;
            ddlProvince.Items.Add(String.Empty);
            foreach (Province pr in prov) { ddlProvince.Items.Add(new ListItem(pr.ProvinceName, pr.ProvinceId.ToString())); }
        }
        else
        {
            ddlProvince.Items.Clear();
        }
    }

    protected void btnImHere_Click(object sender, EventArgs e)
    {
        int i = 0;
        DataAccess da = new DataAccess();
        double Lat = Convert.ToDouble(Latitude.Value), Lon = Convert.ToDouble(Longitude.Value);
        List<Place> lp = da.NearByLocations(Lat, Lon);
        foreach (Place p in lp)
        {
            ListItem li = new ListItem(p.Name, p.PlaceId.ToString());
            if (i == 0) { li.Selected = true; }
            ddlPlace.Items.Add(li);
            i++;
        }
        pnlAddPlace.Visible = false;
        pnlImHere.Visible = true;
    }

    protected void lbImHere_Click(object sender, EventArgs e)
    {
        string UserName = Membership.GetUser().UserName;
        ListViewItem lvi = (ListViewItem)(((LinkButton)sender).Parent);
        HiddenField hd = (HiddenField)lvi.FindControl("hdPlaceId");
        long PlaceId = Convert.ToInt64(hd.Value);
        double dLatitude = Convert.ToDouble(Latitude.Value);
        double dLongitude = Convert.ToDouble(Longitude.Value);
        DataAccess da = new DataAccess();
        Place pl = da.GetPlace(PlaceId);
        pnlImHereNow.Visible = true;
        pnlPlaces.Visible = false;
        hdPlaceId.Value = PlaceId.ToString();
        hdPlaceLat.Value = pl.Latitude.ToString();
        hdPlaceLon.Value = pl.Longitude.ToString();
        hdPlaceTitle.Value = pl.Name;
        lblPlaceTitle.Text = pl.Name;
    }

    protected void btnHere_Click(object sender, EventArgs e)
    {
        string UserName = Membership.GetUser().UserName;
        string WhatsH = txtImHere.Text;
        long PlaceId = Convert.ToInt64(ddlPlace.SelectedValue);
        double dLatitude = Convert.ToDouble(Latitude.Value);
        double dLongitude = Convert.ToDouble(Longitude.Value);
        DataAccess da = new DataAccess();
        da.StoreUserAT(UserName, PlaceId, WhatsH, dLatitude, dLongitude);
    }

    protected void btnLocalCoupons_Click(object sender, EventArgs e)
    {
        double dLatitude = Convert.ToDouble(Latitude.Value);
        double dLongitude = Convert.ToDouble(Longitude.Value);
        DataAccess da = new DataAccess();
    }

    protected void lbBusiness_Click(object sender, EventArgs e)
    {
        string UserName = Membership.GetUser().UserName;
        GridViewRow gvr = (GridViewRow)(((LinkButton)sender).Parent.Parent);
        HiddenField hd = (HiddenField)gvr.FindControl("hdPlaceId");
        string sPlaceId = hd.Value;
        Int64 PlaceId;
        if (!String.IsNullOrEmpty(sPlaceId)) { PlaceId = Convert.ToInt64(sPlaceId); }
    }

    protected void lbLocalDeals_Click(object sender, EventArgs e)
    {
        double dLatitude = Convert.ToDouble(Latitude.Value);
        double dLongitude = Convert.ToDouble(Longitude.Value);
        DataAccess da = new DataAccess();
        pnlCoupons.Visible = true;
        pnlStart.Visible = false;
        List<GeoPromotion> lgp = da.NearByDeals(dLatitude, dLongitude);
        lvCoupons.DataSource = lgp;
        lvCoupons.DataBind();
    }

    protected void lbLocalPlaces_Click(object sender, EventArgs e)
    {
        DataAccess da = new DataAccess();
        double Lat = Convert.ToDouble(Latitude.Value);
        double Lon = Convert.ToDouble(Longitude.Value);
        List<LocationDataModel.Place> places = da.NearByLocations(Lat, Lon);
        lvPlaces.DataSource = places;
        lvPlaces.SelectedIndex = -1;
        lvPlaces.DataBind();
        pnlPlaces.Visible = true;
        pnlStart.Visible = false;
    }

    protected void ReturnFromPlaces_Click(object sender, EventArgs e) { pnlPlaces.Visible = false; pnlStart.Visible = true; }

    protected void ReturnFromDeals_Click(object sender, EventArgs e) { pnlCoupons.Visible = false; pnlStart.Visible = true; }

    protected void btnImHereNow_Click(object sender, EventArgs e)
    {
        long PlaceId = Convert.ToInt32(hdPlaceId.Value);
        string UserName = Membership.GetUser().UserName;
        string WhatsHappening = txtWhatsHappening.Text;
        double UserLat = Convert.ToDouble(Latitude.Value);
        double UserLon = Convert.ToDouble(Longitude.Value);
        DataAccess da = new DataAccess();
        da.StoreUserAT(UserName, PlaceId, WhatsHappening, UserLat, UserLon);
    }

    protected void lbImHereNowReturn_Click(object sender, EventArgs e) { pnlImHereNow.Visible = false; pnlPlaces.Visible = true; }

    protected void lbBackToBeginning_Click(object sender, EventArgs e) { pnlStart.Visible = true; pnlImHereNow.Visible = false; }

    protected void lbWhereIveBeen_Click(object sender, EventArgs e)
    {
        string UserName = Membership.GetUser().UserName;
        pnlStart.Visible = false;
        pnlIveBeenHere.Visible = true;
        DataAccess da = new DataAccess();
        lvWhereIveBeen.DataSource = da.UserATs(UserName, 0, 15);
        lvWhereIveBeen.DataBind();
    }

    protected void lbIveBeenHereBack_Click(object sender, EventArgs e) { pnlIveBeenHere.Visible = false; pnlStart.Visible = true; }

    protected void lbPlaceIveBeen_Click(object sender, EventArgs e)
    {
        LinkButton lb = (LinkButton)sender;
        ListViewItem lvi = (ListViewItem)lb.Parent.Parent;
        HiddenField hdATID = (HiddenField)lvi.FindControl("hdATID");
        Int64 ATID = Convert.ToInt64(hdATID.Value);
        DataAccess da = new DataAccess();
        pnlIveBeenHere.Visible = false;
        pnlPlaceIveBeen.Visible = true;
        var plac = da.GetPlaceViaATID(ATID);
        hdPlaceIveBeenPlaceId.Value = plac.PlaceId.ToString();
        hdPlaceIveBeenLatitude.Value = plac.Latitude.ToString();
        hdPlaceIveBeenLongitude.Value = plac.Longitude.ToString();
        lblPlaceIveBeenName.Text = plac.Name;
        lblPlaceIveBeenAddress.Text = plac.Address1;
        lblPlaceIveBeenCity.Text = plac.City;
        lblPlaceIveBeenState.Text = plac.Province.ProvinceName;
        lblPlaceIveBeenZipCode.Text = plac.ZipCode;
        lblPlaceIveBeenCountry.Text = plac.Country.Name;
    }

    protected void lbNotListed_Click(object sender, EventArgs e) { SetupAddPoint(); pnlPlaces.Visible = false; }

    protected void lbAddAPlace_Click(object sender, EventArgs e) { SetupAddPoint(); }

    private void SetupAddPoint()
    {
        double lat = Convert.ToDouble(Latitude.Value);
        double lon = Convert.ToDouble(Longitude.Value);
        DataAccess da = new DataAccess();
        var zip = da.WhereAmIAt(lat, lon);
        if (zip.Count > 0)
        {
            var z0 = zip[0];
            txtCity.Text = z0.City;
            txtZipCode.Text = z0.ZipCode;
            ddlProvince.ClearSelection();
            if (z0.ProvinceId.HasValue == true)
            {
                foreach (ListItem li in ddlProvince.Items)
                {
                    if (li.Value == z0.ProvinceId.Value.ToString()) { li.Selected = true; break; }
                }
            }
        }
        pnlAddPlace.Visible = true;
        pnlStart.Visible = false;
    }

    protected void lbAddPlaceReturn_Click(object sender, EventArgs e)
    {
        pnlAddPlace.Visible = false;
        pnlStart.Visible = true;
        liPlaceAddMessage.Visible = false;
        PlaceAddMessage.Text = String.Empty;
    }

    protected void lbPlaceIveBeenBack_Click(object sender, EventArgs e) { pnlIveBeenHere.Visible = true; pnlPlaceIveBeen.Visible = false; }

    protected void lbPlaceIveBeenBeginning_Click(object sender, EventArgs e) { pnlPlaceIveBeen.Visible = false; pnlStart.Visible = true; }

    protected string DisplayName(object val)
    {
        string strVal = Convert.ToString(val);
        if (AppleIPhone.IsIPad()) { ipd = iPhoneDevice.iPad; }
        if (AppleIPhone.IsIPhone()) { ipd = iPhoneDevice.iPhone; }
        if (AppleIPhone.IsIPodTouch()) { ipd = iPhoneDevice.iPodTouch; }
        return (iPhoneHelper.DisplayContentOnMenu(strVal, ipd));
    }
}

    iPhoneHelper.cs file:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

public enum iPhoneDevice { iPhone, iPodTouch, iPad }

/// <summary>
/// Summary description for iPhoneHelper
/// </summary>
public class iPhoneHelper
{
    public iPhoneHelper()
    {
        //
        // TODO: Add constructor logic here
        //
    }

    // This code is stupid in retrospect. Use css to solve this problem
    public static string DisplayContentOnMenu(string val, iPhoneDevice ipd)
    {
        string Return = val;
        string Elipsis = "...";
        int iPadMaxLength = 30;
        int iPhoneMaxLength = 15;
        if (ipd == iPhoneDevice.iPad)
        {
            if (Return.Length > iPadMaxLength) { Return = Return.Substring(0, iPadMaxLength - Elipsis.Length) + Elipsis; }
        }
        else
        {
            if (Return.Length > iPhoneMaxLength) { Return = Return.Substring(0, iPhoneMaxLength - Elipsis.Length) + Elipsis; }
        }
        return (Return);
    }
}

    Source code for the ViewStatePage:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;

/// <summary>
/// Summary description for BasePage
/// </summary>
#region Base class for a page.
public class ViewStatePage : System.Web.UI.Page
{
    PageStatePersisterToDatabase myPageStatePersister;

    public ViewStatePage() : base()
    {
        myPageStatePersister = new PageStatePersisterToDatabase(this);
    }

    protected override PageStatePersister PageStatePersister
    {
        get { return myPageStatePersister; }
    }
}
#endregion

#region This class will override the page persistence to store page state in a database.
public class PageStatePersisterToDatabase : PageStatePersister
{
    private string ViewStateKeyField = "__VIEWSTATE_KEY";
    private string _exNoConnectionStringFound = "No Database Configuration information is in the web.config.";

    public PageStatePersisterToDatabase(Page page) : base(page)
    {
    }

    public override void Load()
    {
        // Get the cache key from the web form data
        System.Int64 key = Convert.ToInt64(Page.Request.Params[ViewStateKeyField]);
        Pair state = this.LoadState(key);

        // Abort if cache object is not of type Pair
        if (state == null)
            throw new ApplicationException("Missing valid " + ViewStateKeyField);

        // Set view state and control state
        ViewState = state.First;
        ControlState = state.Second;
    }

    public override void Save()
    {
        // No processing needed if no states available
        if (ViewState == null && ControlState == null)
            return;

        System.Int64 key;
        IStateFormatter formatter = this.StateFormatter;
        Pair statePair = new Pair(ViewState, ControlState);

        // Serialize the statePair object to a string.
        string serializedState = formatter.Serialize(statePair);

        // Save the ViewState and get a unique identifier back.
        key = SaveState(serializedState);

        // Register hidden field to store the cache key in.
        // Page.ClientScript does not work properly with Atlas.
        //Page.ClientScript.RegisterHiddenField(ViewStateKeyField, key.ToString());
        ScriptManager.RegisterHiddenField(this.Page, ViewStateKeyField, key.ToString());
    }

    private System.Int64 SaveState(string PageState)
    {
        System.Int64 i64Key = 0;
        string strConn = String.Empty, strProvider = String.Empty;
        string strSql = "insert into tblPageState ( SerializedState ) values ( '" + SqlEscape(PageState) + "');select scope_identity();";
        SqlConnection sqlCn;
        SqlCommand sqlCm;
        try
        {
            GetDBConnectionString(ref strConn, ref strProvider);
            sqlCn = new SqlConnection(strConn);
            sqlCm = new SqlCommand(strSql, sqlCn);
            sqlCn.Open();
            i64Key = Convert.ToInt64(sqlCm.ExecuteScalar());
            if (sqlCn.State != ConnectionState.Closed) { sqlCn.Close(); }
            sqlCn.Dispose();
            sqlCm.Dispose();
        }
        finally
        {
            sqlCn = null;
            sqlCm = null;
        }
        return i64Key;
    }

    private Pair LoadState(System.Int64 iKey)
    {
        string strConn = String.Empty, strProvider = String.Empty, SerializedState = String.Empty, strMinutesInPast = GetMinutesInPastToDelete();
        Pair PageState;
        string strSql = "select SerializedState from tblPageState where tblPageStateID=" + iKey.ToString() + ";" +
            "delete from tblPageState where DateUpdated<DateAdd(mi, " + strMinutesInPast + ", getdate());";
        SqlConnection sqlCn;
        SqlCommand sqlCm;
        try
        {
            GetDBConnectionString(ref strConn, ref strProvider);
            sqlCn = new SqlConnection(strConn);
            sqlCm = new SqlCommand(strSql, sqlCn);
            sqlCn.Open();
            SerializedState = Convert.ToString(sqlCm.ExecuteScalar());
            IStateFormatter formatter = this.StateFormatter;
            if ((null == SerializedState) || (String.Empty == SerializedState))
            {
                throw (new ApplicationException("No ViewState records were returned."));
            }
            // Deserialize returns the Pair object that is serialized in the Save method.
            PageState = (Pair)formatter.Deserialize(SerializedState);
            if (sqlCn.State != ConnectionState.Closed) { sqlCn.Close(); }
            sqlCn.Dispose();
            sqlCm.Dispose();
        }
        finally
        {
            sqlCn = null;
            sqlCm = null;
        }
        return PageState;
    }

    private string SqlEscape(string Val)
    {
        string ReturnVal = String.Empty;
        if (null != Val) { ReturnVal = Val.Replace("'", "''"); }
        return (ReturnVal);
    }

    private void GetDBConnectionString(ref string ConnectionStringValue, ref string ProviderNameValue)
    {
        if (System.Configuration.ConfigurationManager.ConnectionStrings.Count > 0)
        {
            ConnectionStringValue = System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString;
            ProviderNameValue = System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ProviderName;
        }
        else
        {
            throw new ConfigurationErrorsException(_exNoConnectionStringFound);
        }
    }

    private string GetMinutesInPastToDelete()
    {
        string strReturn = "-60";
        if (null != System.Configuration.ConfigurationManager.AppSettings["MinutesInPastToDeletePageState"])
        {
            strReturn = System.Configuration.ConfigurationManager.AppSettings["MinutesInPastToDeletePageState"].ToString();
        }
        return (strReturn);
    }
}
#endregion

    AppleiPhone.cs file:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

/// <summary>
/// Summary description for AppleIPhone
/// </summary>
public class AppleIPhone
{
    public AppleIPhone()
    {
        //
        // TODO: Add constructor logic here
        //
    }

    static public bool IsIPhoneOS() { return (IsIPad() || IsIPhone() || IsIPodTouch()); }

    static public bool IsIPhone() { return IsTest("iPhone"); }

    static public bool IsIPodTouch() { return IsTest("iPod"); }

    static public bool IsIPad() { return IsTest("iPad"); }

    static private bool IsTest(string Agent)
    {
        bool bl = false;
        string ua = HttpContext.Current.Request.UserAgent.ToLower();
        try { bl = ua.Contains(Agent.ToLower()); }
        catch { }
        return (bl);
    }
}

    Master page .cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;

public partial class MasterPages_iPhoneMaster : System.Web.UI.MasterPage
{
    protected void Page_Load(object sender, EventArgs e)
    {
        HtmlHead head = Page.Header;
        HtmlMeta meta = new HtmlMeta();
        if (AppleIPhone.IsIPad() == true)
        {
            meta.Content = "width=400,user-scalable=no";
        }
        else
        {
            meta.Content = "width=device-width, user-scalable=no";
        }
        meta.Attributes.Add("name", "viewport");
        head.Controls.Add(meta); // add the viewport meta once, after its name attribute is set

        HtmlLink cssLink = new HtmlLink();
        HtmlGenericControl script = new HtmlGenericControl("script");
        script.Attributes.Add("type", "text/javascript");
        script.Attributes.Add("src", ResolveUrl("~/Scripts/iWebKit/javascript/functions.js"));
        head.Controls.Add(script);

        cssLink.Attributes.Add("rel", "stylesheet");
        cssLink.Attributes.Add("href", ResolveUrl("~/Scripts/iWebKit/css/style.css"));
        cssLink.Attributes.Add("type", "text/css");
        head.Controls.Add(cssLink);

        HtmlGenericControl jsLink = new HtmlGenericControl("script");
        //jsLink.Attributes.Add("type", "text/javascript");
        //jsLink.Attributes.Add("src", ResolveUrl("~/Scripts/jquery-1.4.1.min.js") );
        //head.Controls.Add(jsLink);

        HtmlLink appleIcon = new HtmlLink();
        appleIcon.Attributes.Add("rel", "apple-touch-icon");
        appleIcon.Attributes.Add("href", ResolveUrl("~/apple-touch-icon.png"));

        HtmlMeta appleMobileWebAppStatusBarStyle = new HtmlMeta();
        appleMobileWebAppStatusBarStyle.Attributes.Add("name", "apple-mobile-web-app-status-bar-style");
        appleMobileWebAppStatusBarStyle.Attributes.Add("content", "black");
        head.Controls.Add(appleMobileWebAppStatusBarStyle);
    }

    internal string FindPath(string Location)
    {
        string Url = Server.MapPath(Location);
        return (Url);
    }
}
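
    One hedged aside on the persister above, not part of the original post: SaveState and LoadState build their SQL by concatenating strings and rely on the hand-rolled SqlEscape. A sketch of the same two queries rewritten with parameters (same tblPageState table, columns, and System.Data.SqlClient types as in the post; the method names here are hypothetical), which makes SqlEscape unnecessary and lets using blocks handle disposal:

// Sketch only: parameterized equivalents of the persister's two queries.
private static long SaveStateParameterized(string pageState, string connectionString)
{
    const string sql = "insert into tblPageState ( SerializedState ) values ( @state ); select scope_identity();";
    using (SqlConnection cn = new SqlConnection(connectionString))
    using (SqlCommand cm = new SqlCommand(sql, cn))
    {
        cm.Parameters.AddWithValue("@state", pageState); // no manual quote escaping needed
        cn.Open();
        return Convert.ToInt64(cm.ExecuteScalar());      // scope_identity() comes back as decimal
    }
}

private static string LoadStateParameterized(long key, string connectionString)
{
    const string sql = "select SerializedState from tblPageState where tblPageStateID = @key;";
    using (SqlConnection cn = new SqlConnection(connectionString))
    using (SqlCommand cm = new SqlCommand(sql, cn))
    {
        cm.Parameters.AddWithValue("@key", key);
        cn.Open();
        return Convert.ToString(cm.ExecuteScalar());
    }
}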


  • Blending the Sketchflow Action

    - by GeekAgilistMercenary
    Started a new Sketchflow prototype in Expression Blend recently and documented each of the steps. This blog entry covers some of those steps, which are the basic elements of any prototype. I will have more information regarding design, prototype creation, and the process of the initial phases for development in the future. For now, I hope you enjoy this short walkthrough. Also, be sure to check out my last quick entry on Sketchflow.

    I started off with a Sketchflow project, just as I did in my previous entry (which has more specifics about how to manipulate and build out the Sketchflow Map). Once I created the project, I set up the following Sketchflow Map.

    The CoreNavigation is a component screen set up solely for the page navigation at the top of the screen. The XAML markup, in case you want to create a component screen with the same design, is included below.

<UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             mc:Ignorable="d"
             xmlns:i="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
             xmlns:pb="clr-namespace:Microsoft.Expression.Prototyping.Behavior;assembly=Microsoft.Expression.Prototyping.Interactivity"
             x:Class="RapidPrototypeSketchScreens.CoreNavigation"
             d:DesignWidth="624" d:DesignHeight="49" Height="49" Width="624">
  <Grid x:Name="LayoutRoot">
    <TextBlock HorizontalAlignment="Stretch" Margin="307,3,0,0" Style="{StaticResource TitleCenter-Sketch}" Text="Aütøchart Scorecards" TextWrapping="Wrap">
      <i:Interaction.Triggers>
        <i:EventTrigger EventName="MouseLeftButtonDown">
          <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1"/>
        </i:EventTrigger>
      </i:Interaction.Triggers>
    </TextBlock>
    <Button HorizontalAlignment="Left" Margin="164,8,0,11" Style="{StaticResource Button-Sketch}" Width="144" Content="Scorecard">
      <i:Interaction.Triggers>
        <i:EventTrigger EventName="Click">
          <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1_2"/>
        </i:EventTrigger>
      </i:Interaction.Triggers>
    </Button>
    <Button HorizontalAlignment="Left" Margin="8,8,0,11" Style="{StaticResource Button-Sketch}" Width="152" Content="Standard Reports">
      <i:Interaction.Triggers>
        <i:EventTrigger EventName="Click">
          <pb:NavigateToScreenAction TargetScreen="RapidPrototypeSketchScreens.Screen_1_1"/>
        </i:EventTrigger>
      </i:Interaction.Triggers>
    </Button>
  </Grid>
</UserControl>

    Now that the CoreNavigation component screen is done, I built out each of the others. In each of those screens I included the CoreNavigation screen (all those little green lines in the image) as the top navigation. To do that, as I created each of the pages I would hover over the CoreNavigation object in the Sketchflow Map. When the utilities drawer (the small menu that pops down under a node when you hover over it) shows, click on the third little icon and drag it onto the page node you want a navigation screen on.

    Once I created all the screens, I set up the navigation by opening up each screen and right-clicking on the objects that needed to point to somewhere else in the prototype.

    Once I was done with the main page, my home navigation page, it looked something like this in the Expression Blend designer.

    I fleshed out each of the additional screens. Once I was done, I wanted to try out the deployment package.

    To deploy a Sketchflow prototype, simply click File -> Package SketchFlow Project and a prompt will appear. In the prompt, enter what you want the package to be called. I like to see the generated files afterwards too, so I checked the box for that. When Expression Blend is done generating everything, you'll have a directory like the one shown below, with all the files needed for deployment.

    These files can now be copied or moved to any location for viewing. You can even copy them (such as via FTP) to a server location to share with others. Once they are deployed, run "TestPage.html" and the other features of the Sketchflow package become available.

    In the image below I have tagged a few sections to show the Sketchflow Player features. To the top left is the navigation, which provides a clearly defined area of movement in a list. To the center right is the actual prototype application. I have placed lists of things and made edits. On the left-hand side is the highlight feature, which is available in the Feedback section of the lower left. On the right-hand list I underlined "Autochart" with an orange marker and marked out two list items with a red marker.

    Also in the lower-left Feedback section is an area to type in your feedback. This can be useful for time-based feedback, when you post this somewhere and want people to provide subsequent follow-up feedback.

    Overall, lots of great features that enable fairly rapid prototyping with customers. Once you are familiar with the steps and parts of the Sketchflow prototype capabilities, it is easy to step through an application without even stopping. It really is that easy. So get hold of Expression Blend 3 and get ramped up on Sketchflow; it will pay off in the design phases to do so! Original Entry

    Read the article

  • [Windows 8] Update TextBox’s binding on TextChanged

    - by Benjamin Roux
Since UpdateSourceTrigger is not available in WinRT we cannot update the text’s binding of a TextBox at will (or at least not easily), especially when using MVVM (I surely don’t want to write code-behind to do that in each of my apps!). Since this kind of demand is frequent (for example, to disable a button if the TextBox is empty) I decided to create some attached properties to simulate this missing behavior. namespace Indeed.Controls { public static class TextBoxEx { public static string GetRealTimeText(TextBox obj) { return (string)obj.GetValue(RealTimeTextProperty); } public static void SetRealTimeText(TextBox obj, string value) { obj.SetValue(RealTimeTextProperty, value); } public static readonly DependencyProperty RealTimeTextProperty = DependencyProperty.RegisterAttached("RealTimeText", typeof(string), typeof(TextBoxEx), null); public static bool GetIsAutoUpdate(TextBox obj) { return (bool)obj.GetValue(IsAutoUpdateProperty); } public static void SetIsAutoUpdate(TextBox obj, bool value) { obj.SetValue(IsAutoUpdateProperty, value); } public static readonly DependencyProperty IsAutoUpdateProperty = DependencyProperty.RegisterAttached("IsAutoUpdate", typeof(bool), typeof(TextBoxEx), new PropertyMetadata(false, OnIsAutoUpdateChanged)); private static void OnIsAutoUpdateChanged(DependencyObject sender, DependencyPropertyChangedEventArgs e) { var value = (bool)e.NewValue; var textbox = (TextBox)sender; if (value) { Observable.FromEventPattern<TextChangedEventHandler, TextChangedEventArgs>( o => textbox.TextChanged += o, o => textbox.TextChanged -= o) .Do(_ => textbox.SetValue(TextBoxEx.RealTimeTextProperty, textbox.Text)) .Subscribe(); } } } } The code is composed of two attached properties. The first one, “RealTimeText”, reflects the text in real time (updated after each TextChanged event). The second one is only used to enable the functionality. To subscribe to the TextChanged event I used Reactive Extensions (Rx-Metro package in NuGet).
If you’re not familiar with this framework, just replace the Rx subscription with a plain event handler: textbox.TextChanged += (s, e) => textbox.SetValue(TextBoxEx.RealTimeTextProperty, textbox.Text); To use these attached properties, it’s fairly simple: <TextBox Text="{Binding Path=MyProperty, Mode=TwoWay}" ic:TextBoxEx.IsAutoUpdate="True" ic:TextBoxEx.RealTimeText="{Binding Path=MyProperty, Mode=TwoWay}" /> Just make sure to create a binding (in TwoWay mode) for both Text and RealTimeText. Hope this helps!
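To round out the scenario mentioned at the start (disabling a button until the TextBox has content), here is a minimal view-model sketch; the class and property names are hypothetical, not part of the original post:

```csharp
using System.ComponentModel;

// Hypothetical view model: CanSubmit re-evaluates on every keystroke,
// because the RealTimeText binding pushes the text back on each TextChanged.
public class SearchViewModel : INotifyPropertyChanged
{
    private string myProperty;

    public string MyProperty
    {
        get { return myProperty; }
        set
        {
            myProperty = value;
            OnPropertyChanged("MyProperty");
            OnPropertyChanged("CanSubmit"); // refresh the button state
        }
    }

    // Bind a Button's IsEnabled to this property.
    public bool CanSubmit
    {
        get { return !string.IsNullOrWhiteSpace(myProperty); }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(name));
    }
}
```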

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight.
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle-supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create, launch, and run Virtual Machines in the cloud.
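As a side note, the same gallery images can be provisioned from script using the Windows Azure PowerShell module; below is a rough sketch (the service name, VM name, credentials, and image filter are placeholders, and the exact image name should come from Get-AzureVMImage):

```powershell
# Sketch: create a VM from a gallery image via PowerShell (placeholder values).
$image = Get-AzureVMImage |
    Where-Object { $_.Label -like "*Windows Server 2012*" } |
    Select-Object -First 1

New-AzureQuickVM -Windows `
    -ServiceName "mycloudservice01" `
    -Name "myvm01" `
    -ImageName $image.ImageName `
    -AdminUsername "azureadmin" `
    -Password "Str0ngP@ssw0rd!" `
    -Location "West US"
```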
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here are the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web-based applications, and is available at no extra charge (it even works with the free tier).  Higher-level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high-volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans, each with a single API call. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step further and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g.
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud-based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS- and Git-based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS- and Git-based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS- and Git-based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” repository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services.  Now every time someone pushes source code to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site to Windows Azure and New Relic will automatically start monitoring the application. Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning allows you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update introduces a new Billing Alert Service Preview that lets you get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert, first sign up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have set up, and then navigate to the new Alerts tab that is available: The alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can set up a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback.
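As an aside, the partitioned queues described above can also be created from code rather than the portal wizard; here is a small sketch using the Service Bus SDK (the connection string and queue name are placeholders):

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Sketch: create a partitioned queue programmatically.
var namespaceManager = NamespaceManager.CreateFromConnectionString(
    "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...");

var queueDescription = new QueueDescription("orders")
{
    EnablePartitioning = true // spread the queue across multiple brokers/stores
};

if (!namespaceManager.QueueExists(queueDescription.Path))
{
    namespaceManager.CreateQueue(queueDescription);
}
```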
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • The Interaction between Three-Tier Client/Server Model and Three-Tier Application Architecture Model

The three-tier client/server model is a network architectural approach currently used in modern networking. This approach divides a network into three distinct components. Three-Tier Client/Server Model Components Client Component Server Component Database Component The Client Component of the network typically represents any device on the network. A basic example of this would be a computer or other network/web-enabled devices that are connected to a network. Network clients request resources on the network, and are usually equipped with a user interface for the presentation of the data returned from the Server Component. This process is done through the use of various software clients, and an example of this can be seen through the use of a web browser client. The web browser requests information from the Server Component located on the network and then renders the results for the user to process. The Server Components of the network return data based on specific client requests back to the requesting client.  Server Components also inherit the attributes of a Client Component in that they are a device on the network and that they can also request information from other Server Components. However, what differentiates a Client Component from a Server Component is that a Server Component responds to requests from devices on the network. An example of a Server Component can be seen in a web server. A web server listens for new requests and then interprets the request, processes the web pages, and then returns the processed data back to the web browser client so that it may render the data for the user to interpret. The Database Component of the network returns unprocessed data from databases or other resources. This component also inherits attributes from the Server Component in that it is a device on a network, it can request information from other server components and database components, and it also listens for new requests so that it can return data when needed. The three-tier client/server model is very similar to the three-tier application architecture model, and in fact the layers can be mapped to one another. Three-Tier Application Architecture Model Presentation Layer/Logic Business Layer/Logic Data Layer/Logic The Presentation Layer, including its underlying logic, is very similar to the Client Component of the three-tiered model. The Presentation Layer focuses on interpreting the data returned by the Business Layer as well as presenting the data back to the user.  Both the Presentation Layer and the Client Component focus primarily on the user and their experience. This allows for segments of the Business Layer to be distributable and interchangeable because the Presentation Layer is not directly integrated with the Business Layer. The Presentation Layer does not care where the data comes from as long as it is in the proper format. This allows the Presentation Layer and Business Layer to be stored on one or more different servers so that they can provide higher availability to clients requesting data. A good example of this is a web site that uses load balancing. When a web site decides to take on the task of load balancing, it must obtain a network device that sits in front of one or more machines in order to distribute requests across multiple servers. When a user comes in through the load-balanced device they are redirected to a specific server based on a few factors.
Common Load Balancing Factors Current Server Availability Current Server Response Time Current Server Priority The Business Layer and corresponding logic are business rules applied to data prior to it being sent to the Presentation Layer. These rules are used to manipulate the data coming from the Data Access Layer, in addition to validating any data prior to it being stored in the Data Access Layer. A good example of this would be when a user is trying to create multiple accounts under one email address. The Business Layer logic can prevent duplicate accounts by enforcing a unique email for every new account before the data is even stored in the Data Access Layer. The Server Component can be directly tied to this layer in that the server typically stores and processes the Business Layer before it is returned to the end-user via the Presentation Layer. In addition, the Server Component can also run automated processes through the Business Layer on the data in the Data Access Layer so that additional business analysis can be derived from the data that has already been collected. The Data Layer and its logic are responsible for storing information so that it can be easily retrieved. Typically, in modern applications, data is stored in a database management system; however, data can also be in the form of files stored on a file server. In addition, a database can take on one of several forms. Common Database Formats XML File Pipe Delimited File Tab Delimited File Comma Delimited File (CSV) Plain Text File Microsoft Access Microsoft SQL Server MySql Oracle Sybase The Database Component of the networking model can be directly tied to the Data Layer because this is where the Data Layer obtains the data to return back to the Business Layer. The Database Component basically provides a place on the network to store data for future use. This enables applications to save data when they can and then quickly recall the saved data as needed so that the application does not have to worry about storing the data in memory. This prevents overhead that could be created when an application must retain all data in memory. As you can see, the Three-Tier Client/Server Networking Model and the Three-Tiered Application Architecture Model rely very heavily on one another to function, especially if different aspects of an application are distributed across an entire network. The use of various servers and database servers is wonderful when an application has a need to distribute work across the network. Network Components and Application Layers Interaction Database components will store all data needed for the Data Access Layer to manipulate and return to the Business Layer Server Component executes the Business Layer that manipulates data so that it can be returned to the Presentation Layer Client Component hosts the Presentation Layer that interprets the data and presents it to the user
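To make the duplicate-account example above concrete, here is a minimal sketch of how the Business Layer rule sits between the Presentation Layer and the Data Layer (all type names are illustrative):

```csharp
// Data Layer contract: storage and retrieval only, no business rules.
public interface IAccountRepository
{
    bool EmailExists(string email);
    void SaveAccount(string email);
}

// Business Layer: enforces the "one account per email" rule before
// anything reaches the Data Layer.
public class AccountService
{
    private readonly IAccountRepository repository;

    public AccountService(IAccountRepository repository)
    {
        this.repository = repository;
    }

    public bool CreateAccount(string email)
    {
        if (repository.EmailExists(email))
            return false; // Presentation Layer shows a friendly error

        repository.SaveAccount(email); // only validated data is stored
        return true;
    }
}
```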

    Read the article

  • Testing Mobile Websites with Adobe Shadow

    - by dwahlin
It’s no surprise that mobile development is all the rage these days. With all of the new mobile devices being released nearly every day, the ability for developers to deliver mobile solutions is more important than ever. Nearly every developer or company I’ve talked to recently about mobile development in training classes, at conferences, and on consulting projects says that they need to find a solution to get existing websites into the mobile space. Although there are several different frameworks out there that can be used such as jQuery Mobile, Sencha Touch, jQTouch, and others, how do you test how your site renders on iOS, Android, Blackberry, Windows Phone, and the variety of mobile form factors out there? Although there are different virtual solutions that can be used including Electric Plum for iOS, emulators, browser plugins for resizing the laptop/desktop browser, and more, at some point you need to test on as many physical devices as possible. This can be extremely challenging and quite time-consuming, especially when you consider that you have to manually enter URLs into devices and click links on each one to drill down into sites. Adobe Labs just released a product called Adobe Shadow (thanks to Kurt Sprinzl for letting me know about it) that significantly simplifies testing sites on physical devices, debugging problems you find, and even making live modifications to HTML and CSS content while viewing a site on the device to see how rendering changes. You can view a page in your laptop/desktop browser and have it automatically pushed to all of your devices without actually touching the device (a huge time saver). See a problem with a device? Locate it using the free Chrome extension, pull up inspection tools (based on the Chrome Developer tools) and make live changes through Chrome that appear on the respective device so that it’s easy to identify how problems can be resolved. I’ve been using Adobe Shadow and am very impressed with the amount of time saved and the different features that it offers. In the rest of the post I’ll walk through how to get it installed, get it started, and use it to view and debug pages.   Getting Adobe Shadow Installed The following steps can be used to get Adobe Shadow installed: 1. Download and install Adobe Shadow on your laptop/desktop 2. Install the Adobe Shadow extension for Chrome 3. Install the Adobe Shadow app on all of your devices (you can find it in various app stores) 4. Connect your devices to Wifi. Make sure they’re on the same network that your laptop/desktop machine is on   Getting Adobe Shadow Started Once Adobe Shadow is installed, you’ll need to get it running on your laptop/desktop and on all your mobile devices. The following steps walk through that process: 1. Start the Adobe Shadow application on your laptop/desktop 2. Start the Adobe Shadow app on each of your mobile devices 3. Locate the laptop/desktop name in the list that’s shown on each mobile device: 4. Select the laptop/desktop name and a passcode will be shown: 5. Open the Adobe Shadow Chrome extension on the laptop/desktop and enter the passcode for the given device: Using Adobe Shadow to View and Modify Pages Once Adobe Shadow is up and running on your laptop/desktop and on all of your mobile devices you can navigate to a page in Chrome on the laptop/desktop and it will automatically be pushed out to all connected mobile devices. If you have 5 mobile devices set up they’ll all navigate to the page displayed in Chrome (pretty awesome!).
This makes it super easy to see how a given page looks on your iPad, Android device, etc. without having to touch the device itself. If you find a problem with a page on a device you can select the device in the Chrome Adobe Shadow extension on your laptop/desktop and select the remote inspector icon (it’s the < > icon): This will pull up the Adobe Shadow remote debugging window which contains the standard Chrome Developer tool tabs such as Elements, Resources, Network, etc. Click on the Elements tab to see the HTML rendered for the target device and then drill into the respective HTML content, CSS styles, etc. As HTML elements are selected in the Adobe Shadow debugging tool they’ll be highlighted on the device itself just like they would if you were debugging a page directly in Chrome with the developer tools. Here’s an example from my Android device that shows how the page looks on the device as I select different HTML elements on the laptop/desktop: Conclusion I’m really impressed with what I’ve seen to this point from Adobe Shadow. Controlling pages that display on devices directly from my laptop/desktop is a big time saver and the ability to remotely see changes made through the Chrome Developer Tools (on my laptop/desktop) really pushes the tool over the top. If you’re developing mobile applications it’s definitely something to check out. It’s currently free to download and use. For additional details check out the video below:

    Read the article

  • Hack Extension Files to Make Them Version-Compatible for Firefox

    - by Asian Angel
A well-known drawback in using Firefox is the problem with extension compatibility when a new major version is released. Whether it is for a new extension that you are trying for the first time or an old favorite, we have a way to get those extensions working for you again. There are multiple reasons why you might want to choose this method to fix a non-compatible extension: You are uncomfortable with tweaking the “about:config” settings You prefer to maintain the original “about:config” settings in a pristine state and like having compatibility checking active You are looking to gain some “geek cred” Keep in mind that most extensions will work perfectly well with a new version of Firefox and simply have the “version compatibility number” problem. But once in a while there may be one that needs to have some work done on it by the extension’s author. The Problem Here is a perfect example of everyone’s least favorite “extension message”. This is the last thing that you need when all that you want is for your favorite extension (or a new one) to work on a fresh clean install. Note: This works nicely to “replace” non-compatible extensions already present in your browser if you are simply upgrading. Hacking the XPI File For this procedure you will need to manually download the extension to your hard-drive (right click on the extension’s “Install Button” and select “Save As”). Once you have done that you are ready to start hacking the extension. For our example we chose the “GCal Popup Extension”. The best thing to do is place the extension in a new folder (i.e. the Desktop or other convenient location) then unzip it just the same way that you would with any regular zip file. Once it is unzipped you will see the various folders and files that were in the “xpi file” (we had four files here but depending on the extension the number may vary). There is only one file that you need to focus on…the “install.rdf” file. Note: At this point you should move the original extension file to a different location (i.e. outside of the folder) so that it is no longer present. Open the file in “Notepad” so that you can change the number for the “maxVersion”. Here the number is listed as “3.5.*” but we needed to make it higher… Replacing the “5” with a “7” is all that we needed to do. Once you have entered your new “maxVersion” number save the file. At this point you will need to re-zip all of the files back into a single file. Make certain that you “create” a file with the “.zip file extension”; otherwise this will not work. Once you have the new zip file created you will need to rename the entire file including the “file extension”. For our example we copied and pasted the original extension name. Once you have changed the name click outside of the “text area”. You will see a small message window like this asking for confirmation…click “Yes” to finish the process. Now your modified/updated extension is ready to install. Drag the extension into your browser to install it and watch that wonderful “Restart to complete the installation.” message appear. As soon as your browser starts you can check the “Add-ons Manager Window” and see the version compatibility numbers for the extension. Looking very very nice! And just like that your extension should be up and running without any problems.
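For reference, the fragment of “install.rdf” being edited looks roughly like the following; the extension ID and version numbers are illustrative, the GUID is Firefox’s well-known application ID, and a real file also declares the em namespace on its RDF root:

```xml
<Description about="urn:mozilla:install-manifest">
  <em:id>extension@example.com</em:id>
  <em:version>1.0</em:version>
  <em:targetApplication>
    <Description>
      <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id> <!-- Firefox -->
      <em:minVersion>3.0</em:minVersion>
      <em:maxVersion>3.7.*</em:maxVersion> <!-- bumped up from 3.5.* -->
    </Description>
  </em:targetApplication>
</Description>
```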
Conclusion If you are looking to try something new, gain some geek cred, or just want to keep your Firefox install as close to the original condition as possible, this method should get those extensions working nicely for you again.

    Read the article

  • Silverlight 5 – What’s New? (Including Screenshots & Code Snippets)

    - by mbcrump
Silverlight 5 is coming next year (2011) and this blog post will tell you what you need to know before the beta ships. First, let me address people saying that it is dead after PDC 2010. I believe that it’s best to see what the market is doing, not the vendor. Below is a list of companies that are developing Silverlight 4 applications shown during the Silverlight Firestarter. Some of the companies have shipped and some haven’t. It’s just great to see the actual company names that are working on Silverlight instead of “people are developing for Silverlight”. The next thing that I wanted to point out was that HTML5, WPF and Silverlight can co-exist. In case you missed Scott Guthrie’s keynote, they actually had a slide with all three stacked together. This shows Microsoft will be heavily investing in each technology.  Even I, a Silverlight developer, am reading Pro HTML5. Microsoft said that according to the Silverlight Feature Voting site, 21k votes were entered. Microsoft has implemented about 70% of these votes in Silverlight 5. That is an amazing number, and I am crossing my fingers that Microsoft bundles Silverlight with Windows 8. Let’s get started… what’s new in Silverlight 5? I am going to show you some great applications and actual code shown during the Firestarter event. Media Hardware Video Decode – Instead of using the CPU to decode, we will offload it to the GPU. This will allow netbooks, etc., to play videos. Trickplay – Variable Speed Playback – Pitch Correction (If you speed up someone talking they won’t sound like a chipmunk). Power Management – Less battery use when playing video. Screensavers will no longer kick in if you are watching a video. If you pause a video, then the screensaver will kick in. Remote Control Support – This will allow users to control playback functions like Pause, Rewind and Fast Forward. IIS Media Services 4 has shipped and now supports Azure. Data Binding Layout Transitions – With just a few lines of XAML you can create a really rich experience that is not using Storyboards or animations. RelativeSource FindAncestor – Ancestor RelativeSource bindings make it much easier for a DataTemplate to bind to a property on a container control. Custom Markup Extensions – Markup extensions allow code to be run at XAML parse time for both properties and event handlers. This is great for MVVM support. Changing Styles during Runtime By Binding in Style Setters – Changing Styles at runtime used to be a real pain in Silverlight 4; now it’s much easier. Binding in style setters allows bindings to reference other properties. XAML Debugging – Below you can see that we set a breakpoint in XAML. This shows us exactly what is going on with our binding.  WCF & RIA Services WS-Trust Support – Taken from Wikipedia: WS-Trust is a WS-* specification and OASIS standard that provides extensions to WS-Security, specifically dealing with the issuing, renewing, and validating of security tokens, as well as with ways to establish, assess the presence of, and broker trust relationships between participants in a secure message exchange. You can reduce network latency by using a background thread for networking. Supports Azure now.  Text and Printing Improved text clarity that enables better text rendering. Multi-column text flow, character tracking and leading support, and full OpenType font support.  Includes a new Postscript Vector Printing API that provides control over what you print. Pivot functionality baked into the Silverlight 5 SDK.
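To give a feel for the FindAncestor binding called out above, here is a short sketch (the property names are hypothetical) of a DataTemplate reaching back to its containing ListBox’s DataContext:

```xml
<!-- Sketch: an item template binding to a property on the ListBox's DataContext. -->
<ListBox ItemsSource="{Binding Items}">
  <ListBox.ItemTemplate>
    <DataTemplate>
      <TextBlock Text="{Binding DataContext.Title,
                RelativeSource={RelativeSource FindAncestor, AncestorType=ListBox}}" />
    </DataTemplate>
  </ListBox.ItemTemplate>
</ListBox>
```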
Graphics Immediate mode graphics support that will enable you to use the GPU and 3D graphics support. Take a look at what was shown in the demos below. 1) A 3D view of the Earth – not really a real-world application, though. 2) A doctor’s portal. This demo really stood out for me as it shows what we can do with the 3D / GPU support. Out of Browser OOB applications can now create and manage child windows as shown in the screenshot below.  Trusted OOB applications can use P/Invoke to call Win32 APIs and unmanaged libraries.  Enterprise Group Policy Support allows enterprises to lock down (or open up) the sandbox capabilities of Silverlight 5 applications. In this demo, he tore the “notes” off of the application and it appeared in a new window. See the black arrow below. In this demo, he connected a USB device which fired off a local Win32 application that provided the data off the USB stick to Silverlight. Another demo showed a Silverlight 5 application exporting data right into Excel while running inside the browser. Testing They demoed Coded UI, which is available now in the Visual Studio Feature Pack 2. This will allow you to create automated tests without writing any code manually. Performance: Microsoft has worked to improve the Silverlight startup time. Silverlight 5 provides 64-bit browser support.  Silverlight 5 also provides IE9 hardware acceleration.   I am looking forward to Silverlight 5 and I hope you are too. Thanks for reading and I hope you visit again soon.
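As a taste of the P/Invoke support mentioned above, a trusted (elevated) Silverlight 5 OOB application could call a Win32 API roughly like this (a sketch; MessageBeep is just a simple, safe example):

```csharp
using System.Runtime.InteropServices;

public static class NativeMethods
{
    // Direct Win32 call, available to elevated-trust Silverlight 5 apps.
    [DllImport("user32.dll")]
    public static extern bool MessageBeep(uint uType);
}

// Usage: NativeMethods.MessageBeep(0); // play the default system sound
```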

    Read the article

  • Changing enum in a different class for screen

    - by user2434321
    I'm trying to make a start menu for my game, and my code uses enums to monitor the screen state. Now I want to change the screen state declared in the main class from my Background class. Screen screen = new Screen(); is declared in the Game1 class, and the background is constructed with Background(ref screen);. This is in the Update method for the Background class:

    KeyboardState keystate = Keyboard.GetState();
    switch (screen)
    {
        case Screen.Start:
            if (isPressed && keystate.IsKeyUp(Keys.Up) && keystate.IsKeyUp(Keys.Down) && keystate.IsKeyUp(Keys.Enter))
            {
                isPressed = false;
            }
            if (keystate.IsKeyDown(Keys.Down) && isPressed != true)
            {
                if (menuState == MenuState.Options) menuState = MenuState.Credits;
                if (menuState == MenuState.Play) menuState = MenuState.Options;
                isPressed = true;
            }
            if (keystate.IsKeyDown(Keys.Up) && isPressed != true)
            {
                if (menuState == MenuState.Options) menuState = MenuState.Play;
                if (menuState == MenuState.Credits) menuState = MenuState.Options;
                isPressed = true;
            }
            switch (menuState)
            {
                case MenuState.Play:
                    arrowRect.X = 450;
                    arrowRect.Y = 220;
                    if (keystate.IsKeyDown(Keys.Enter) && isPressed != true) screen = Screen.Play;
                    break;
                case MenuState.Options:
                    arrowRect.X = 419;
                    arrowRect.Y = 340;
                    if (keystate.IsKeyDown(Keys.Enter) && isPressed != true) screen = Screen.Options;
                    break;
                case MenuState.Credits:
                    arrowRect.X = 425;
                    arrowRect.Y = 460;
                    if (keystate.IsKeyDown(Keys.Enter) && isPressed != true) screen = Screen.Credits;
                    break;
            }
            break;
    }

    For some reason, when I play this and hit the Enter button, the Background class's screen is changed but the main class's screen isn't. How can I change this?

    EDIT 1:

    class Background
    {
        private Texture2D background;
        private Rectangle backgroundRect;
        private Texture2D arrow;
        private Rectangle arrowRect;
        private Screen screen;
        private MenuState menuState;
        private bool isPressed = false;

        public Screen getScreenState(ref Screen screen)
        {
            this.screen = screen;
            return this.screen;
        }

        public Background(ref Screen screen)
        {
            this.screen = screen;
        }

        public void Update()
        {
            KeyboardState keystate = Keyboard.GetState();
            switch (screen)
            {
                case Screen.Start:
                    if (isPressed && keystate.IsKeyUp(Keys.Up) && keystate.IsKeyUp(Keys.Down) && keystate.IsKeyUp(Keys.Enter))
                    {
                        isPressed = false;
                    }
                    if (keystate.IsKeyDown(Keys.Down) && isPressed != true)
                    {
                        if (menuState == MenuState.Options) menuState = MenuState.Credits;
                        if (menuState == MenuState.Play) menuState = MenuState.Options;
                        isPressed = true;
                    }
                    if (keystate.IsKeyDown(Keys.Up) && isPressed != true)
                    {
                        if (menuState == MenuState.Options) menuState = MenuState.Play;
                        if (menuState == MenuState.Credits) menuState = MenuState.Options;
                        isPressed = true;
                    }
                    switch (menuState)
                    {
                        case MenuState.Play:
                            arrowRect.X = 450;
                            arrowRect.Y = 220;
                            if (keystate.IsKeyDown(Keys.Enter) && isPressed != true) screen = Screen.Play;
                            break;
                        case MenuState.Options:
                            arrowRect.X = 419;
                            arrowRect.Y = 340;
                            if (keystate.IsKeyDown(Keys.Enter) && isPressed != true) screen = Screen.Options;
                            break;
                        case MenuState.Credits:
                            arrowRect.X = 425;
                            arrowRect.Y = 460;
                            if (keystate.IsKeyDown(Keys.Enter) && isPressed != true) screen = Screen.Credits;
                            break;
                    }
                    break;
                case Screen.Pause:
                    break;
                case Screen.Over:
                    break;
            }
        }

        public void LoadStartContent(ContentManager Content, GraphicsDeviceManager graphics)
        {
            background = Content.Load<Texture2D>("startBackground");
            arrow = Content.Load<Texture2D>("arrow");
            backgroundRect = new Rectangle(0, 0, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);
            arrowRect = new Rectangle(450, 225, arrow.Width, arrow.Height);
            screen = Screen.Start;
        }

        public void LoadPlayContent(ContentManager Content, GraphicsDeviceManager graphics)
        {
            background = Content.Load<Texture2D>("Background");
            backgroundRect = new Rectangle(0, 0, graphics.GraphicsDevice.Viewport.Width, graphics.GraphicsDevice.Viewport.Height);
            screen = Screen.Play;
        }

        public void LoadOverContent(ContentManager Content, GraphicsDeviceManager graphics)
        {
        }

        public void Draw(SpriteBatch spritebatch)
        {
            if (screen == Screen.Start)
            {
                spritebatch.Draw(background, backgroundRect, Color.White);
                spritebatch.Draw(arrow, arrowRect, Color.White);
            }
            else
                spritebatch.Draw(background, backgroundRect, Color.White);
        }
    }

    That's my Background class!
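    The likely cause here is that Screen is an enum, a value type: "ref" only aliases the caller's variable for the duration of the constructor call, so "this.screen = screen" stores a private copy. Below is a minimal sketch of one common fix, wrapping the shared state in a reference type so both classes observe the same value; the ScreenManager name is hypothetical and not from the question:

    // Because Screen is a value type, Background kept its own copy of it.
    // A shared reference type makes every assignment visible to both classes.
    public enum Screen { Start, Play, Options, Credits, Pause, Over }

    public class ScreenManager
    {
        public Screen Current { get; set; }   // shared by Game1 and Background
    }

    public class Background
    {
        private readonly ScreenManager screens;

        public Background(ScreenManager screens)
        {
            this.screens = screens;           // both classes now hold the same instance
        }

        public void Update()
        {
            if (screens.Current == Screen.Start)
            {
                // menu handling as in the question; this assignment is seen by Game1 too
                screens.Current = Screen.Play;
            }
        }
    }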

    Read the article

  • Oracle Enterprise Manager Ops Center 12c : Enterprise Controller High Availability (EC HA)

    - by Anand Akela
    Contributed by Mahesh Sharma, Oracle Enterprise Manager Ops Center team. In Oracle Enterprise Manager Ops Center 12c we introduced a new feature to make the Enterprise Controller highly available. With EC HA, if the hardware crashes, or if the Enterprise Controller services and/or the remote database stop responding, the enterprise services are immediately restarted on the standby Enterprise Controller without administrative intervention. In today's post, I'll briefly describe EC HA, look at some of the prerequisites, and then show some screen shots of how the Enterprise Controller is represented in the BUI. In my next post, I'll show you how to install the EC in an HA environment and some of the new commands.

    What is EC HA? Enterprise Controller High Availability (EC HA) provides an active/standby fail-over solution for two or more Ops Center Enterprise Controllers, all within an Oracle Clusterware framework. This allows EC resources to relocate to a standby if the hardware crashes, or if certain services fail. It is also possible to manually relocate the services if maintenance on the active EC is required. When the EC services are relocated to the standby, EC services are interrupted only for the period it takes for the EC services to stop on the active node and start back up on a standby node.

    What are the prerequisites? To install EC in an HA framework, an understanding of the prerequisites is required. There are many possibilities for how these prerequisites can be installed and configured; we will not discuss these in this post. However, best practices should be applied when installing and configuring, and I would suggest that you get expert help if you are not familiar with them. Let's briefly look at each of these prerequisites in turn:

    Hardware: Servers are required to host the active and standby node(s). As the nodes will be in a clustered environment, they need to be the same model and configured identically. The nodes should have the same processor class, number of cores, memory, and network cards, for example.
    Operating System: We can use Solaris 10 9/10 or higher, Solaris 11, or OEL 5.5 or higher, on x86 or SPARC.
    Network: There are a number of requirements for network cards in Clusterware, and cables should be networked identically on all the nodes. We must also consider IP allocation for public/private and virtual IPs (VIPs).
    Storage: Shared storage is required for the cluster voting disks, Oracle Cluster Registry (OCR), and the EC's libraries.
    Clusterware: Oracle Clusterware version 11.2.0.3 or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
    Remote Database: Oracle RDBMS 11.1.0.x or later is required. This can be downloaded from: http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html

    For detailed information on how to install EC HA, please read: http://docs.oracle.com/cd/E27363_01/doc.121/e25140/install_config-shared.htm#OPCSO242
    For detailed instructions on installing Oracle Clusterware, please read: http://docs.oracle.com/cd/E11882_01/install.112/e17214/chklist.htm#BHACBGII
    For detailed instructions on installing the remote Oracle database, have a read of: http://www.oracle.com/technetwork/database/enterprise-edition/documentation/index.html

    The schematic diagram below gives a visual view of how the prerequisites are connected. When a fail-over occurs, the Enterprise Controller resources and the VIP are relocated to one of the standby nodes.
    The standby node then becomes active and all Ops Center services are resumed.

    Connecting to the Enterprise Controller from your favourite browser: Let's presume we have installed and configured all the prerequisites, and installed Ops Center on the active and standby nodes. We can now connect to the active node from a browser, i.e. http://<active_node1>/, and this will redirect us to the virtual IP address (VIP). The VIP is the IP address that moves with the Enterprise Controller resource. Once you log on and view the assets, you will see some new symbols; these indicate that the nodes are cluster members, with one being an active member and the other a standby member in this case. If you connect to the standby node, the browser will redirect you to a splash page indicating that you have connected to the standby node. Hope you find this topic interesting. Next time I will post about how to install the Enterprise Controller in the HA framework.

    Read the article

  • Understanding each other in web development

    - by Pete Hotchkin
    During my career I have been lucky enough to work in several different roles within web development with many extremely talented people, from incredible designers who were passionate about the placement of every pixel, right through to server administrators and DBAs who were always measuring the improvements they were making to their queries in the smallest possible unit. The problem I always faced was that more often than not I was stuck in the middle, trying to mediate between these different functions and enable each side to understand the other's point of view. In my experience, the main areas of contention between these functional groups have come at two key points: during the build phase, and then when there is a problem post-build. At both of these times it is often easier for someone to pass the buck onto someone else than to spend the time understanding the other person's perspective. Below is a quick look at two upcoming tools that will not only speed up the build phase for each function, but also help when it comes to the issues faced once a site has been pushed live.

    In my experience a web project goes through several phases of development. The first of these is design, generally handled as Photoshop files which are then passed on to a front-end developer. This is the first point at which heated discussions can arise. One problem I've seen several times is that the designer doesn't fully understand the platform constraints that need to be considered, and as a result has designed something that does not translate very well or is simply not possible. Working at Red Gate, I am lucky enough to meet some amazing people, and this happened just the other day when I was introduced to Neil Kinnish and Pete Nelson, the creators of what I believe could be a great asset in this designer-developer relationship: Mixture.

    Mixture allows the front-end developer to quickly prototype a web page with built-in frameworks such as Bootstrap. It's not an IDE, however; it just sits in the background and monitors the project files, so every time you save a file from your favorite IDE it will compile things like LESS, compact your JavaScript, and then automatically refresh your test browser so you can see the changes instantly. I think one of the best parts, however, is a single button that pushes the changed files up to the web, so the designer can instantly see how far the developer has got and the problem he is facing at that time, without the need to spend time setting up a remote server. I can see this being a real asset to remote teams where there needs to be a compromise between the designer and the front-end developer, or simply as a way to let the designer see how the build is progressing and suggest small alterations.

    Once the design has been built into the front end, the designer's job is generally done and there are no other points of contention between the designer and the other functions involved in building these web projects. As the project moves into the stage of integrating it into the back end and deploying it to the production server, other functions start to be pulled in and other issues arise, such as the back-end developer understanding the frameworks that they are using - for example, the routes that are in place in an MVC application, or the number of database calls that the ORM layer is actually making.
    There are many tools out there that can help with these problems, such as MiniProfiler, which gives you a quick snapshot of what is going on directly in the browser. For a slightly more in-depth look at what is happening, and to gain a deeper understanding of an application you may be working on, you may want to consider Glimpse. Created by Nik and Anthony, it is an application that sits at the bottom of your browser (installed via NuGet) and shows you information about how your application is pieced together and how the information on screen is being delivered, as it happens. With a wealth of community-built plugins, such as ones for NHibernate and LINQ to SQL (full list of plugins on NuGet), it can be customized directly to your own setup to truly delve into the code and see what is happening. It can also help to reduce the number of confusing moments where you wonder whether it is your code that is going wrong or whether there is something more sinister happening directly on the server.

    All the tools that I have mentioned in this post help to do one thing above all, and that is to ease the barrier of understanding between the different functions that are involved in building and maintaining a web application. In my experience it is very easy to say "Well, that's not my problem", simply because the two functions involved don't truly understand the other's point of view. Software should not only be seen as a way to streamline our own working process or as a debugging tool, but also as a communication aid to improve the entire lifecycle of a web project.

    Glimpse is actually the project that I am the designer on, and I would love to get your feedback if you do decide to try it out. If you would like to share your own experiences of working on web projects, please fill in your details at https://www.surveymk.com/s/joinGlimpse or add a comment below and I will get in touch with you.

    Read the article

  • Doing unit and integration tests with the Web API HttpClient

    - by cibrax
    One of the nice things about the new HttpClient in System.Net.Http is the support for mocking responses or handling requests in an HTTP server hosted in-memory. While the first option is useful for scenarios in which we want to test our client code in isolation (unit tests, for example), the second one enables more complete integration testing scenarios that could include more components in the stack, such as model binders or message handlers.

    The HttpClient can receive an HttpMessageHandler as an argument in one of its constructors:

    public class HttpClient : HttpMessageInvoker
    {
        public HttpClient();
        public HttpClient(HttpMessageHandler handler);
        public HttpClient(HttpMessageHandler handler, bool disposeHandler);
    }

    For the first scenario, you can create a new HttpMessageHandler that fakes the response, which you can use in your unit test. The only requirement is that you somehow inject an HttpClient with this custom handler into the client code.

    public class FakeHttpMessageHandler : HttpMessageHandler
    {
        HttpResponseMessage response;

        public FakeHttpMessageHandler(HttpResponseMessage response)
        {
            this.response = response;
        }

        protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
        {
            var tcs = new TaskCompletionSource<HttpResponseMessage>();
            tcs.SetResult(response);
            return tcs.Task;
        }
    }

    In a unit test, you can do something like this.
    var fakeResponse = new HttpResponseMessage();
    var fakeHandler = new FakeHttpMessageHandler(fakeResponse);
    var httpClient = new HttpClient(fakeHandler);
    var customerService = new CustomerService(httpClient);
    // Do something
    // Asserts

    CustomerService in this case is the class under test, and the one that receives an HttpClient initialized with our fake handler. For the second scenario, integration tests, there is an in-memory host, "System.Web.Http.HttpServer", which also derives from HttpMessageHandler and which you can use with an HttpClient instance in your test; a sketch follows below. This has been discussed already in these two great posts from Pedro and Filip.
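    As a rough illustration of that second scenario, here is a minimal sketch of an in-memory integration test. The route template and the "customers" controller name are hypothetical, not taken from the post:

    using System.Net.Http;
    using System.Web.Http;

    // Build the in-memory pipeline: HttpServer derives from HttpMessageHandler,
    // so it can back an HttpClient directly, with no sockets involved.
    var config = new HttpConfiguration();
    config.Routes.MapHttpRoute(
        name: "Default",
        routeTemplate: "api/{controller}/{id}",
        defaults: new { id = RouteParameter.Optional });

    var server = new HttpServer(config);
    var client = new HttpClient(server);

    // The request flows through routing, message handlers and model binding,
    // but never leaves the process.
    var response = client.GetAsync("http://localhost/api/customers/1").Result;
    response.EnsureSuccessStatusCode();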

    Read the article

  • [Windows 8] Application bar popup button

    - by Benjamin Roux
    Here is a small control to create an application bar button which will display its content in a popup when the button is clicked. Visually it gives this:

    So how do you create this? First you have to use the AppBarPopupButton control below.

    namespace Indeed.Controls
    {
        public class AppBarPopupButton : Button
        {
            public FrameworkElement PopupContent
            {
                get { return (FrameworkElement)GetValue(PopupContentProperty); }
                set { SetValue(PopupContentProperty, value); }
            }

            public static readonly DependencyProperty PopupContentProperty =
                DependencyProperty.Register("PopupContent", typeof(FrameworkElement), typeof(AppBarPopupButton),
                    new PropertyMetadata(null, (o, e) => (o as AppBarPopupButton).CreatePopup()));

            private Popup popup;
            private SerialDisposable sizeChanged = new SerialDisposable();

            protected override void OnTapped(Windows.UI.Xaml.Input.TappedRoutedEventArgs e)
            {
                base.OnTapped(e);
                if (popup != null)
                {
                    var transform = this.TransformToVisual(Window.Current.Content);
                    var offset = transform.TransformPoint(default(Point));
                    sizeChanged.Disposable = PopupContent.ObserveSizeChanged()
                        .Do(_ => popup.VerticalOffset = offset.Y - (PopupContent.ActualHeight + 20))
                        .Subscribe();
                    popup.HorizontalOffset = offset.X + 24;
                    popup.DataContext = this.DataContext;
                    popup.IsOpen = true;
                }
            }

            private void CreatePopup()
            {
                popup = new Popup { IsLightDismissEnabled = true };
                popup.Closed += (o, e) => this.GetParentOfType<AppBar>().IsOpen = false;
                popup.ChildTransitions = new Windows.UI.Xaml.Media.Animation.TransitionCollection();
                popup.ChildTransitions.Add(new Windows.UI.Xaml.Media.Animation.PopupThemeTransition());
                var container = new Grid();
                container.Children.Add(PopupContent);
                popup.Child = container;
            }
        }
    }

    The ObserveSizeChanged method is just an extension method which observes the SizeChanged event (using Reactive Extensions - the Rx-Metro package on NuGet). If you're not familiar with Rx, you can replace this line (and the SerialDisposable stuff) with a simple subscription to the SizeChanged event (using +=), but don't forget to unsubscribe from it!
    public static IObservable<Unit> ObserveSizeChanged(this FrameworkElement element)
    {
        return Observable.FromEventPattern<SizeChangedEventHandler, SizeChangedEventArgs>(
                o => element.SizeChanged += o,
                o => element.SizeChanged -= o)
            .Select(_ => Unit.Default);
    }

    The GetParentOfType extension method just retrieves the first parent of the given type (it's a common extension method that every Windows 8 developer should have created!); a possible implementation is sketched at the end of this entry. You can of course tweak the control (for example, if you want to center the content relative to the button, or anything else) to fit your needs.

    How do you use this control? It's very simple: in an AppBar control, just add it and define the PopupContent property.

    <ic:AppBarPopupButton Style="{StaticResource RefreshAppBarButtonStyle}" HorizontalAlignment="Left">
        <ic:AppBarPopupButton.PopupContent>
            <Grid>
                [...]
            </Grid>
        </ic:AppBarPopupButton.PopupContent>
    </ic:AppBarPopupButton>

    When the button is clicked the popup is displayed. When the popup is closed, the app bar is closed too. I hope this will help you!
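    Since the post doesn't show GetParentOfType, here is a minimal sketch of how such a helper is commonly written for WinRT/XAML; this implementation is my assumption, not the author's code:

    using Windows.UI.Xaml;
    using Windows.UI.Xaml.Media;

    public static class VisualTreeExtensions
    {
        // Walks up the visual tree and returns the first ancestor of type T, or null.
        public static T GetParentOfType<T>(this DependencyObject element) where T : DependencyObject
        {
            var parent = VisualTreeHelper.GetParent(element);
            while (parent != null && !(parent is T))
            {
                parent = VisualTreeHelper.GetParent(parent);
            }
            return parent as T;
        }
    }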

    Read the article

  • Check Your Spelling, Grammar, and Style in Firefox and Chrome

    - by Matthew Guay
    Are you tired of making simple writing mistakes that get past your browser's spell-check? Here's how you can get advanced grammar checking and more in Firefox and Chrome with After the Deadline. Microsoft Word has spoiled us with grammar, syntax, and spell checking, but the default spell check in Firefox and Chrome still only does basic checks. Even webapps like Google Docs don't check more than basic spelling errors. However, WordPress.com is an exception; it offers advanced spelling, grammar, and syntax checking with its After the Deadline proofing system. This helps keep you from making embarrassing mistakes on your blog posts, and now, thanks to a couple of free browser plugins, it can help you avoid these mistakes in any website or webapp.

    After the Deadline in Google Chrome: Add the After the Deadline extension (link below) to Chrome as usual. As soon as it's installed, you're ready to start improving your online writing. To check spelling, grammar, and more, click the ABC button that you'll now see at the bottom of most text boxes online. After a quick scan, grammar mistakes are highlighted in green, complex expressions and other syntax problems are highlighted in blue, and spelling mistakes are highlighted in red, as would be expected. Click on an underlined word to choose one of its recommended changes or ignore the suggestion. Or, if you want more explanation about what was wrong with that word or phrase, click Explain for more info. And, if you forget to run an After the Deadline scan before submitting a text entry, it will automatically check to make sure you still want to submit it. Click Cancel to go back and check your writing first. To change the After the Deadline settings, click its icon in the toolbar and select View Options. Additionally, if you want to disable it on the site you're on, you can click Disable on this site directly from the popup. From the settings page, you can choose extra things to check for, such as double negatives and redundant phrases, as well as add sites and words to ignore.

    After the Deadline in Firefox: Add the After the Deadline add-on to Firefox (link below) as normal. After the Deadline works basically the same in Firefox as it does in Chrome. Select the ABC icon in the lower right corner of text boxes to check them for problems, and After the Deadline will underline the problems as it did in Chrome. To view a suggested change in Firefox, right-click on the underlined word and select the recommended change or ignore the suggestion. And, if you forget to check, you'll see a friendly reminder asking if you're sure you want to submit your text as it is. You can access the After the Deadline settings in Firefox from the menu bar. Click Tools, then select AtD Preferences. In Firefox, the settings are in an options dialog with three tabs, but it includes the same options as the Chrome settings page. Here you can make After the Deadline as correction-happy as you like.

    Conclusion: The web has increasingly become an interactive place, and seldom does a day go by that we aren't entering text in forms and comments that may stay online forever. Even our insignificant tweets are being archived in the Library of Congress. After the Deadline can help you make sure that your permanent internet record is as grammatically correct as possible. Even though it doesn't catch every problem, and even misses some spelling mistakes, it's still a great help.
    Links:
    Download the After the Deadline extension for Google Chrome
    Download the After the Deadline add-on for Firefox

    Read the article

  • Demystified - BI in SharePoint 2010

    - by Sahil Malik
    Frequently, my clients ask me if there is a good guide to deciphering the seemingly daunting choice of products from Microsoft when it comes to business intelligence offerings in a SharePoint 2010 world. These are all described in detail in my book, but here is a one (well, maybe two) page executive overview.

    Microsoft Excel: Yes, Microsoft Excel! Your favorite, and the most commonly used 'database' in the world. No, it isn't a database by strict technical definitions, but you will find many business users craft up very compelling Excel sheets with tonnes of logic inside them. Good for: quick ad-hoc reports. Excel 64-bit allows the possibility of very large datasheets (also see 32-bit vs 64-bit Office, and the PowerPivot add-in below). Audience: end business users can build such solutions. Related technologies: PowerPivot, Excel Services.

    Microsoft Excel with PowerPivot Add-In: The PowerPivot add-in is an extension to Excel that adds support for large-scale data. Think of this as Excel with the ability to deal with very large amounts of data. It has an in-memory data store as an option for Analysis Services. Good for: ad-hoc reporting and logic with very large amounts of data. Audience: end business users can build such solutions. Related technologies: Excel, and Excel Services.

    Excel Services: Excel Services is a Microsoft SharePoint Server 2010 shared service that brings the power of Excel to SharePoint Server by providing server-side calculation and browser-based rendering of Excel workbooks. Thus, Excel sheets can be created by end users and published to the SharePoint server, where they are rendered right through the browser in read-only or parameterized read-only modes. They can also be accessed by other software via SOAP or REST based APIs. Good for: sharing Excel sheets with a larger number of people while maintaining control/version control, etc.; sharing logic embedded in Excel sheets with other software across the organization via REST/SOAP interfaces. Audience: end business users can build such solutions once your tech staff has set up Excel Services on a SharePoint server instance; programmers can write software consuming functionality/complex formulae contained in your sheets. Related technologies: PerformancePoint Services, Excel, and PowerPivot.

    Visio Services: Visio Services is a shared service on the Microsoft SharePoint Server 2010 platform that allows users to share and view Visio diagrams that may or may not have data connected to them. Connected data can update these diagrams, allowing a visual/graphical view into the data. The diagrams are viewable through the browser. They are rendered in Silverlight, but will automatically down-convert to .png format. Good for: showing data as diagrams, with live updating; comes with a developer story. Audience: end business users can build such solutions once your tech staff has set up Visio Services on a SharePoint server instance; developers can enhance the visualizations. Related technologies: Visio Services can be used to render workflow visualizations in SP2010.

    Reporting Services: SQL Server Reporting Services can integrate with SharePoint, allowing you to store reports and data sources in SharePoint document libraries and render these reports, and associated functionality such as subscriptions, through a SharePoint site. In SharePoint 2010, you can also write reports against SharePoint lists (Access Services uses this technique). Good for: showing complex reports running against an industry-standard data store, such as SQL Server. Audience: this is definitely developer land; don't expect end users to craft up reports unless a report model has previously been published. Related technologies: PerformancePoint Services.

    PerformancePoint Services: PerformancePoint Services in SharePoint 2010 is now fully integrated with SharePoint, and comes with features that can either be used in the BI center site definition, or on their own as activated features in existing site collections. PerformancePoint Services allows you to build reports and dashboards that target a variety of back-end data sources including SQL Server Reporting Services, SQL Server Analysis Services, SharePoint lists, Excel Services, simple tables, etc. Using these you have the ability to create dashboards, scorecards/KPIs, and simple reports. You can also create reports targeting hierarchical multidimensional data sources. The visual decomposition tree is a new report type that lets you quickly break down multi-dimensional data. Good for: mostly everything :), except your wallet - it's not free! But this is the most comprehensive offering. If you have SharePoint Server, forget everything and go with PerformancePoint. Audience: developers need to set up the back-end sources and the manageability story; DBAs need to set up data warehouses with cubes; moderately sophisticated business users, or developers, can craft up reports using Dashboard Designer, which is a ClickOnce app that deploys with PerformancePoint. Related technologies: Excel Services, Reporting Services, etc.

    Other relevant technologies to know about:

    Business Connectivity Services: Allows for consumption of external data in SharePoint as columns or external lists. This can be paired with one or more of the above BI offerings, allowing insight into such data.
    Access Services: Allows the representation/publishing of an Access database as a SharePoint 2010 site, leveraging many SharePoint features. Reporting Services is used by Access Services.
    Secure Store Service: The SP2010 Secure Store Service is a replacement for the SP2007 single sign-on feature. It acts as a credential policeman, providing credentials to the various applications running within SharePoint. BCS, PerformancePoint Services, Excel Services, and many other apps use the SSS (Secure Store Service) for credential control.

    Read the article

  • Monitoring Events in your BPEL Runtime - RSS Feeds?

    - by Ramkumar Menon
    @10g - It had been a while since I'd tried something different, so here's what I did this week! Whenever our developers deployed processes to the BPEL runtime, or perhaps when a process got turned off due to connectivity issues, or maybe someone retired a process, I needed to know. So here's what I did.

    Step 1: Downloaded the Quartz libraries and went through the documentation to understand what it takes to schedule a recurring job.

    Step 2: Cranked out two components using Oracle JDeveloper [within a new Web Project]:

    a) A simple Java class named FeedUpdater that extends org.quartz.Job. All this class does is connect to your BPEL runtime [via opmn:ormi] and fetch all events that occurred in the last "n" minutes. Events? If that doesn't ring a bell, they are right there on the BPEL Console: if you click on "Administration > Process Log", what you see are events. The API to retrieve the events is:

    //get the locator reference for the domain you are interested in.
    Locator l = ....

    //Predicate to retrieve events for last "n" minutes
    WhereCondition wc = new WhereCondition(...)

    //get all those events you needed.
    BPELProcessEvent[] events = l.listProcessEvents(wc);

    After you get all these events, write them out into an RSS feed XML structure and stream it into a file that resides either in your Apache htdocs, or wherever it can be accessed via HTTP. You can read all about RSS 2.0 here. At a high level, here is how it looks:

    <?xml version = '1.0' encoding = 'UTF-8'?>
    <rss version="2.0">
      <channel>
        <title>Live Updates from the Development Environment</title>
        <link>http://soadev.myserver.com/feeds/</link>
        <description>Live Updates from the Development Environment</description>
        <lastBuildDate>Fri, 19 Nov 2010 01:03:00 PST</lastBuildDate>
        <language>en-us</language>
        <ttl>1</ttl>
        <item>
          <guid>1290213724692</guid>
          <title>Process compiled</title>
          <link>http://soadev.myserver.com/BPELConsole/mdm_product/administration.jsp?mode=processLog&amp;processName=&amp;dn=all&amp;eventType=all&amp;eventDate=600&amp;Filter=++Filter++</link>
          <pubDate>Fri Nov 19 00:00:37 PST 2010</pubDate>
          <description>SendPurchaseOrderRequestService: 3.0 Time : Fri Nov 19 00:00:37 PST 2010</description>
        </item>
        ......
      </channel>
    </rss>

    For writing out XML content, read through the Oracle XML Parser APIs [search around for oracle.xml.parser.v2].

    b) Now that my "Job" was done, my job was half done. Next, I wrote up a simple scheduler servlet that schedules the above "Job" class to be executed every "n" minutes. It is very straightforward. Here is the primary section of the code:

    try {
        Scheduler sched = StdSchedulerFactory.getDefaultScheduler();
        //get n and make a trigger that executes every "n" seconds
        Trigger trigger = TriggerUtils.makeSecondlyTrigger(n);
        trigger.setName("feedTrigger" + System.currentTimeMillis());
        trigger.setGroup("feedGroup");
        JobDetail job = new JobDetail("SOA_Feed" + System.currentTimeMillis(), "feedGroup", FeedUpdater.class);
        sched.scheduleJob(job, trigger);
    } catch(Exception ex) {
        ex.printStackTrace();
        throw new ServletException(ex.getMessage());
    }

    Look up the Quartz API and documentation; it will make this look much simpler. Now that both components were ready, I packaged the application into a war file and deployed it onto my application server. When the servlet initialized, the "n" second schedule was set.
    From then on, the servlet kept populating the RSS feed file. I just ensured that my "Job" code keeps only the 30 latest events, so that the feed file stays small and under control [a few KBs]. Next I opened up the feed XML in my browser - it requested a subscription - and there I was, watching new deployments/life cycle events all popping up on my browser toolbar every 5 (actually n) minutes! Well, you could do it in a browser/reader of your choice, or perhaps read them like you read an email in your Thunderbird!

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as the storage, service bus, access control, etc. Interacting with Windows Azure services is possible in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code can also be used to consume WAS when hosted on WAWS; but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS Node application.

    We can use the solution that I created in my last post. Alternatively, we can create a new Windows Azure project in Visual Studio with a worker role, add the "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as a module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

    The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

    Back to Visual Studio: open the "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
    And if we publish our application to Azure, it works with WASD and the storage service through the configurations for the cloud.

    Summary

    In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime, and how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application, I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

    I have used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js.

    Node.js is a very new and young network application platform. But since it's very simple, easy to learn and deploy, and utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-sensitive projects. And as Node.js is very good at scaling out, it's even more useful on a cloud computing platform.

    Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't support SQL parameters yet, while "azure" does support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use, and on adding more features and functionality.

    PS: you can download the source code here. You can download the source code of my "Copy all always" tool here.

    Hope this helps,
    Shaun

    All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Populate a WCF syndication podcast using MP3 ID3 metadata tags

    - by brian_ritchie
    In the last post, I showed how to create a podcast using WCF syndication. A podcast is an RSS feed containing a list of audio files to which users can subscribe. The podcast not only contains links to the audio files, but also metadata about each episode. A cool approach to building the feed is reading this metadata from the ID3 tags on the MP3 files used for the podcast. One library to do this is TagLib-Sharp. Here is some sample code:

    var taggedFile = TagLib.File.Create(f);
    var fileInfo = new FileInfo(f);
    var item = new iTunesPodcastItem()
    {
        title = taggedFile.Tag.Title,
        size = fileInfo.Length,
        url = feed.baseUrl + fileInfo.Name,
        duration = taggedFile.Properties.Duration,
        mediaType = feed.mediaType,
        summary = taggedFile.Tag.Comment,
        subTitle = taggedFile.Tag.FirstAlbumArtist,
        id = fileInfo.Name
    };
    if (!string.IsNullOrEmpty(taggedFile.Tag.Album))
        item.publishedDate = DateTimeOffset.Parse(taggedFile.Tag.Album);

    This reads the ID3 tags into an object for later use in creating the syndication feed. When the MP3 is created, these tags are set... or they can be set after the fact using the Properties dialog in Windows Explorer (a sketch of setting them in code follows at the end of this entry). The only "hack" is that there isn't an easily accessible tag for "subtitle" or "published date", so I used other tags in this example. Feel free to change this to meet your purposes; you could remove the subtitle and use the file-modified date, for example.

    That takes care of the episodes. For the feed-level settings, we'll load those from an XML file:

    <?xml version="1.0" encoding="utf-8" ?>
    <iTunesPodcastFeed
        baseUrl=""
        title=""
        subTitle=""
        description=""
        copyright=""
        category=""
        ownerName=""
        ownerEmail=""
        mediaType="audio/mp3"
        mediaFiles="*.mp3"
        imageUrl=""
        link=""
    />

    Here is the full code put together.
Read the feed XML file and deserialize it into an iTunesPodcastFeed class. Then loop over the files in a directory, reading the ID3 tags from the audio files:

    public static iTunesPodcastFeed CreateFeedFromFiles(string podcastDirectory, string podcastFeedFile)
    {
        XmlSerializer serializer = new XmlSerializer(typeof(iTunesPodcastFeed));
        iTunesPodcastFeed feed;
        using (var fs = File.OpenRead(Path.Combine(podcastDirectory, podcastFeedFile)))
        {
            feed = (iTunesPodcastFeed)serializer.Deserialize(fs);
        }
        foreach (var f in Directory.GetFiles(podcastDirectory, feed.mediaFiles))
        {
            try
            {
                var taggedFile = TagLib.File.Create(f);
                var fileInfo = new FileInfo(f);
                var item = new iTunesPodcastItem()
                {
                    title = taggedFile.Tag.Title,
                    size = fileInfo.Length,
                    url = feed.baseUrl + fileInfo.Name,
                    duration = taggedFile.Properties.Duration,
                    mediaType = feed.mediaType,
                    summary = taggedFile.Tag.Comment,
                    subTitle = taggedFile.Tag.FirstAlbumArtist,
                    id = fileInfo.Name
                };
                if (!string.IsNullOrEmpty(taggedFile.Tag.Album))
                    item.publishedDate = DateTimeOffset.Parse(taggedFile.Tag.Album);
                feed.Items.Add(item);
            }
            catch
            {
                // ignore files that can't be accessed successfully
            }
        }
        return feed;
    }

Usually putting a "try...catch" like this is bad, but in this case I'm just skipping over files that are locked while they are being uploaded to the web site. Here is the code from the last couple of posts.
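As a hedged sketch of the final step (the previous post in the series covers the real implementation), the populated iTunesPodcastFeed might be mapped onto WCF's SyndicationFeed roughly like this. The ToSyndicationFeed helper name and the property mappings are illustrative assumptions, not the author's actual code:

    using System;
    using System.Linq;
    using System.ServiceModel.Syndication;

    public static class PodcastFeedExtensions
    {
        // Sketch only: maps the iTunesPodcastFeed built above onto WCF's
        // System.ServiceModel.Syndication types. Helper name and mappings
        // are assumptions for illustration.
        public static SyndicationFeed ToSyndicationFeed(this iTunesPodcastFeed feed)
        {
            var syndicationFeed = new SyndicationFeed(feed.title, feed.description, new Uri(feed.link));

            syndicationFeed.Items = feed.Items.Select(e =>
            {
                var item = new SyndicationItem(e.title, e.summary, new Uri(e.url))
                {
                    Id = e.id,
                    PublishDate = e.publishedDate
                };
                // The enclosure link is what podcast clients actually download.
                item.Links.Add(SyndicationLink.CreateMediaEnclosureLink(new Uri(e.url), e.mediaType, e.size));
                return item;
            }).ToList();

            return syndicationFeed;
        }
    }

The returned SyndicationFeed can then be formatted with the usual Rss20FeedFormatter and served from the WCF syndication endpoint described in the earlier post.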

    Read the article

  • Why do my 512x512 bitmaps look jaggy on Android OpenGL?

    - by Milo Mordaunt
This is sort of driving me nuts. I've googled and googled and tried everything I can think of, but my sprites still look super blurry and super jaggy. Example here: https://docs.google.com/open?id=0Bx9Gbwnv9Hd2TmpiZkFycUNmRTA If you click through to the actual full-size image you should see what I mean; it's like it's taking an average of every 5x5 pixels or something. The background looks really blurry and blocky, but the ball is the worst. The clouds look all right for some reason, probably because they're mostly transparent. I know the pngs aren't top notch themselves, but hey, I'm no artist! I would imagine it's a problem with either:

a. How the pngs are made. Example sprite (512x512): https://docs.google.com/open?id=0Bx9Gbwnv9Hd2a2RRQlJiQTFJUEE

b. How my matrices work. These are the relevant parts of the renderer:

    public void onDrawFrame(GL10 unused) {
        if (world != null) {
            dt = System.currentTimeMillis() - endTime;
            world.update((float) dt);
            // Redraw background color
            GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
            Matrix.setIdentityM(mvMatrix, 0);
            Matrix.translateM(mvMatrix, 0, 0f, 0f, 0f);
            world.draw(mvMatrix, mProjMatrix);
            endTime = System.currentTimeMillis();
        } else {
            Log.d(TAG, "There is no world....");
        }
    }

    public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
        Matrix.orthoM(mProjMatrix, 0, 0, width / 2, 0, height / 2, -1.f, 1.f);
    }

And this is what each Quad does when draw is called:

    public void draw(float[] mvMatrix, float[] pMatrix) {
        Matrix.setIdentityM(mMatrix, 0);
        Matrix.setIdentityM(mvMatrix, 0);
        Matrix.translateM(mMatrix, 0, xPos, yPos, 0.f);
        Matrix.multiplyMM(mvMatrix, 0, mvMatrix, 0, mMatrix, 0);
        Matrix.scaleM(mvMatrix, 0, scale, scale, 0f);
        Matrix.rotateM(mvMatrix, 0, angle, 0f, 0f, -1f);

        GLES20.glUseProgram(mProgram);
        posAttr = GLES20.glGetAttribLocation(mProgram, "vPosition");
        texAttr = GLES20.glGetAttribLocation(mProgram, "aTexCo");
        uSampler = GLES20.glGetUniformLocation(mProgram, "uSampler");
        int alphaHandle = GLES20.glGetUniformLocation(mProgram, "alpha");

        GLES20.glVertexAttribPointer(posAttr, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, 0, vertexBuffer);
        GLES20.glVertexAttribPointer(texAttr, 2, GLES20.GL_FLOAT, false, 0, texCoBuffer);
        GLES20.glEnableVertexAttribArray(posAttr);
        GLES20.glEnableVertexAttribArray(texAttr);

        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
        GLES20.glUniform1i(uSampler, 0);
        GLES20.glUniform1f(alphaHandle, alpha);

        mMVMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVMatrix");
        mPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uPMatrix");
        GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mvMatrix, 0);
        GLES20.glUniformMatrix4fv(mPMatrixHandle, 1, false, pMatrix, 0);

        GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, 4, GLES20.GL_UNSIGNED_SHORT, indicesBuffer);

        GLES20.glDisableVertexAttribArray(posAttr);
        GLES20.glDisableVertexAttribArray(texAttr);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
    }
c. How my texture loading/blending/shaders setup works. Here is the renderer setup:

    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        // Set the background frame color
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        GLES20.glDisable(GLES20.GL_DEPTH_TEST);
        GLES20.glDepthMask(false);
        GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
        GLES20.glEnable(GLES20.GL_BLEND);
        GLES20.glEnable(GLES20.GL_DITHER);
    }

Here is the vertex shader:

    attribute vec4 vPosition;
    attribute vec2 aTexCo;
    varying vec2 vTexCo;
    uniform mat4 uMVMatrix;
    uniform mat4 uPMatrix;

    void main() {
        gl_Position = uPMatrix * uMVMatrix * vPosition;
        vTexCo = aTexCo;
    }

And here's the fragment shader:

    precision mediump float;
    uniform sampler2D uSampler;
    uniform vec4 vColor;
    varying vec2 vTexCo;
    varying float alpha;

    void main() {
        vec4 color = texture2D(uSampler, vec2(vTexCo));
        gl_FragColor = color;
        if (gl_FragColor.a == 0.0) {
            discard;
        }
    }

This is how textures are loaded:

    private int loadTexture(int resource) {
        int[] texture = new int[1];
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inScaled = false;
        Bitmap temp = BitmapFactory.decodeResource(context.getResources(), resource, opts);
        GLES20.glGenTextures(1, texture, 0);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture[0]);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, temp, 0);
        GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
        temp.recycle();
        return texture[0];
    }

I'm sure I'm doing about 20,000 things wrong, so I'm really sorry if the problem is blindingly obvious... The test device is a Galaxy Note running a JellyBean custom ROM, if that matters at all. So the screen resolution is 1280x800, which means... The background is 1024x1024, so yeah, it might be a little blurry, but it shouldn't be made of Lego. Thank you so much, any answer at all would be appreciated.
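One plausible culprit, offered as a guess rather than a confirmed diagnosis: the loader above generates mipmaps but sets GL_TEXTURE_MIN_FILTER to plain GL_LINEAR, so sprites drawn smaller than their 512x512 source never sample the mipmap chain and shimmer. A minimal sketch of the changed lines inside loadTexture:

    // Sketch of one common fix (an assumption, not a confirmed diagnosis):
    // GL_LINEAR ignores the generated mipmaps; a mipmap-aware minification
    // filter lets scaled-down sprites use the pre-filtered levels instead.
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, temp, 0);
    GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D); // must come after the upload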

    Read the article

  • Implementing synchronous MediaTypeFormatters in ASP.NET Web API

    - by cibrax
One of the main characteristics of MediaTypeFormatters in ASP.NET Web API is that they leverage the Task Parallel Library (TPL) for reading or writing a model into a stream. When you derive your class from the base class MediaTypeFormatter, you have to implement either the WriteToStreamAsync or the ReadFromStreamAsync method, for writing or reading a model from a stream respectively. These two methods return a Task, which internally does all the serialization work, as illustrated below.

    public abstract class MediaTypeFormatter
    {
        public virtual Task WriteToStreamAsync(Type type, object value, Stream writeStream,
            HttpContent content, TransportContext transportContext);
        public virtual Task<object> ReadFromStreamAsync(Type type, Stream readStream,
            HttpContent content, IFormatterLogger formatterLogger);
    }

However, most of the time serialization is a safe operation that can be done synchronously. In fact, many of the serializer classes you will find in the .NET Framework only provide sync methods. So the question is, how can you transform that synchronous work into a Task? Creating a new task with Task.Factory.StartNew to do all the serialization work would probably be the typical answer. That would work, as a new task is going to be scheduled. However, it might involve some unnecessary context switches, which are out of our control and might affect performance, especially in server code. If you take a look at the source code of the MediaTypeFormatters shipped as part of the framework, you will notice that they are actually using another pattern, which relies on the TaskCompletionSource class.
    public Task WriteToStreamAsync(Type type, object value, Stream writeStream,
        HttpContent content, TransportContext transportContext)
    {
        var tsc = new TaskCompletionSource<AsyncVoid>();
        tsc.SetResult(default(AsyncVoid));

        // Do all the serialization work here synchronously

        return tsc.Task;
    }

    /// <summary>
    /// Used as the T in a "conversion" of a Task into a Task{T}
    /// </summary>
    private struct AsyncVoid
    {
    }

They are basically doing all the serialization work synchronously and using a TaskCompletionSource to return a task that has already completed. To conclude this post, this is another approach you might want to consider when using serializers that are not compatible with an async model. Update: Henrik Nielsen from the ASP.NET team pointed out the existence of a built-in media type formatter for writing sync formatters: BufferedMediaTypeFormatter (http://t.co/FxOfeI5x).
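Building on that pointer, here is a minimal sketch of a synchronous formatter based on BufferedMediaTypeFormatter; the Person type and the CSV framing are assumptions made up for this illustration, not part of the original post:

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Formatting;
    using System.Net.Http.Headers;

    // Sketch only: the base class adapts the sync overrides to the async
    // pipeline for you, so no TaskCompletionSource boilerplate is needed.
    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    public class PersonCsvFormatter : BufferedMediaTypeFormatter
    {
        public PersonCsvFormatter()
        {
            SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/csv"));
        }

        public override bool CanReadType(Type type)
        {
            return false; // write-only formatter in this sketch
        }

        public override bool CanWriteType(Type type)
        {
            return type == typeof(Person);
        }

        // Synchronous override; the base class exposes it as WriteToStreamAsync.
        public override void WriteToStream(Type type, object value,
            Stream writeStream, HttpContent content)
        {
            var person = (Person)value;
            using (var writer = new StreamWriter(writeStream))
            {
                writer.WriteLine("{0},{1}", person.Name, person.Age);
            }
        }
    }

Registering it is the usual one-liner, e.g. GlobalConfiguration.Configuration.Formatters.Add(new PersonCsvFormatter()).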

    Read the article

  • Access Services in SharePoint Server 2010

    - by Wayne
Another SharePoint Server 2010 feature which cannot go unnoticed is Access Services. Access Services is a service in SharePoint Server 2010 that allows administrators to view, edit, and configure a Microsoft Access application within a web browser. Access Services settings support backup and recovery, regardless of whether there is a UI setting in Central Administration. However, backup and recovery only apply to service-level and administrative-level settings; end-user content from the Access application is not backed up as part of this process. Access Services has Windows PowerShell functionality that can be used to provision the service using settings from a previous backup; configure and manage macro and query settings; manage and configure session management; and configure all the global settings of the service (a sketch of the cmdlets appears at the end of this post).

Key Benefits of SharePoint Server Access Services

Easier access to the right tools: The enhanced, customizable Ribbon in Access 2010 makes it easy to uncover more commands so you can focus on the end product. The new Microsoft Office Backstage view is yet another feature that can help you easily analyze and document your database and share, publish, and customize your Access 2010 experience, all from one convenient location.

Helps build databases effortlessly and quickly: Out-of-the-box templates and reusable components make Access Services the fastest, simplest database solution available. Find new pre-built templates which you can start using without customization, or select templates created by your peers in the Access online community and customize them to meet your needs. Build your databases with new modular components: new Application Parts enable you to add a set of common Access components, such as a table and form for task management, to your database in a few simple clicks. Database navigation is now simplified: create navigation forms to make your frequently used forms and reports more accessible without writing any code or logic.

Create impactful forms and reports: Whether it's an inventory of your assets or a customer sales database, Access 2010 brings the innovative tools you'd expect from Microsoft Office. Access Services makes it easy to spot trends and add emphasis to your data, quickly create coordinating database forms and reports, and bring the Web into your database.

Obtain a centralized landing pad for your data: Access 2010 offers easy ways to bring your data together and help increase work quality. New technologies help break down barriers so you can share and work together on your databases, making you or your team more efficient and productive.

Add automation and complex expressions: If you need a more robust database design, such as preventing record deletion if a specific condition is met, or if you need to create calculations to forecast your budget, Access 2010 empowers you to be your own developer. The enhanced Expression Builder greatly simplifies your expression-building experience with IntelliSense. With the revamped Macro Designer, it's now even easier for you to add basic logic to your database. New Data Macros allow you to attach logic to your data, centralizing the logic on the table, not the objects that update your data.

Key features of Access Services 2010

- Access database content through a web browser: The newly added Access Services on Microsoft SharePoint Server 2010 enables you to make your databases available on the Web with new Web databases.
Users without an Access client can open Web forms and reports via a browser, and changes are automatically synchronized.

- Simplify how you access the features you need: The Ribbon, improved in Access 2010, helps you access commands even more quickly by enabling you to customize or create your own tabs. The new Microsoft Office Backstage view replaces the traditional File menu to provide one central, organized location for all of your document management tasks.

- Codeless navigation: Use professional-looking, web-like navigation forms to make frequently used forms and reports more accessible without writing any code or logic.

- Easily reuse Access items in other databases: Use Application Parts to add pre-built Access components for common tasks to your database in a few simple clicks. You can also package common database components, such as data entry forms and reports for task management, and reuse them across your organization or other databases.

- Simplified formatting: By using Office themes you can create coordinated, professional forms and reports across your database. Simply select a familiar, great-looking Office theme, or design your own, and apply it to your database. Newly created Access objects will automatically match your chosen theme.
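As promised above, a hedged sketch of the PowerShell side. Get-SPAccessServiceApplication and Set-SPAccessServiceApplication are real SharePoint 2010 cmdlets, but the specific parameter used below is an assumption; verify it with Get-Help before relying on it:

    # Hedged sketch: administering Access Services with the SharePoint 2010
    # Windows PowerShell cmdlets. Verify parameter names with
    # Get-Help Set-SPAccessServiceApplication -Detailed.
    Add-PSSnapin Microsoft.SharePoint.PowerShell

    # List the Access Services service applications in the farm.
    Get-SPAccessServiceApplication

    # Adjust a session-management setting on one of them (parameter assumed).
    Get-SPAccessServiceApplication -Identity "Access Services" |
        Set-SPAccessServiceApplication -SessionsPerUserMax 5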

    Read the article

  • Grid Layouts in ADF Faces using Trinidad

    - by frank.nimphius
ADF Faces does provide a data table component, but none to define grid layouts. Grids are common in web design, and developers often try HTML table markup wrapped in an f:verbatim tag, or added directly to the page, to build a desired layout. Usually these attempts fail, showing unpredictable results. ADF Faces does not provide a table layout component, but Apache MyFaces Trinidad does. The Trinidad trh:tableLayout component is a thin wrapper around the HTML table element and contains a series of row layout elements, trh:rowLayout. Each trh:rowLayout component may contain one or many trh:cellFormat components to format cell content.

    <trh:tableLayout id="tl1" halign="left">
      <trh:rowLayout id="rl1" valign="top" halign="left">
        <trh:cellFormat id="cf1" width="100" header="true">
          <af:outputLabel value="Label 1" id="ol1"/>
        </trh:cellFormat>
        <trh:cellFormat id="cf2" header="true" width="300">
          <af:outputLabel value="Label 2" id="outputLabel1"/>
        </trh:cellFormat>
      </trh:rowLayout>
      <trh:rowLayout id="rowLayout1" valign="top" halign="left">
        <trh:cellFormat id="cellFormat1" width="100" header="false">
          <af:outputLabel value="Label 3" id="outputLabel2"/>
        </trh:cellFormat>
      </trh:rowLayout>
      ...
    </trh:tableLayout>

To add the Trinidad tag library to your ADF Faces projects:

- Open the Component Palette and right-click into it.
- Choose "Edit Tag Libraries" and select the Trinidad components.
- Move them to the "Selected Libraries" section and OK the dialog.

The first time you drag a Trinidad component to a page, the web.xml file is updated with the required filters (a sketch of that registration follows after the links below).

Note: The Trinidad tags don't participate in the ADF Faces RC geometry management. However, they are JSF components that are part of the JSF request lifecycle. ADF Faces RC components work well with Trinidad layout components that don't use PPR. The PPR implementation of Trinidad is different from the one in ADF Faces. However, when you mix ADF Faces components with Trinidad components, avoid Trinidad components that have integrated PPR behavior.
Only use passive Trinidad components. See:

http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_tableLayout.html
http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_rowLayout.html
http://myfaces.apache.org/trinidad/trinidad-api/tagdoc/trh_cellFormat.html
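For reference, a hedged sketch of what that web.xml registration typically looks like. The TrinidadFilter and ResourceServlet class names are part of Trinidad itself; the filter/servlet names and mappings are conventional choices, so adjust them to your project:

    <!-- Sketch of the typical Trinidad registration added to web.xml.
         Filter/servlet names and mappings are conventional, not mandated. -->
    <filter>
      <filter-name>trinidad</filter-name>
      <filter-class>org.apache.myfaces.trinidad.webapp.TrinidadFilter</filter-class>
    </filter>
    <filter-mapping>
      <filter-name>trinidad</filter-name>
      <servlet-name>Faces Servlet</servlet-name>
    </filter-mapping>
    <servlet>
      <servlet-name>resources</servlet-name>
      <servlet-class>org.apache.myfaces.trinidad.webapp.ResourceServlet</servlet-class>
    </servlet>
    <servlet-mapping>
      <servlet-name>resources</servlet-name>
      <url-pattern>/adf/*</url-pattern>
    </servlet-mapping>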

    Read the article
