Search Results


  • strftime doesn't display year correctly

    - by paultop6
    Hi guys, I have the following code:

        const char* timeformat = "%Y-%m-%d %H:%M:%S";
        const int timelength = 20;
        char timecstring[timelength];
        strftime(timecstring, timelength, timeformat, currentstruct);
        cout << "timecstring is: " << timecstring << "\n";

    currentstruct is a tm*. The cout gives me the date in the correct format, but the year is not 2010 but 3910. I know this has something to do with the year count starting at 1900, but I'm not sure how to get strftime to recognise this and not add 1900 to the value of 2010 that is already there. Can anyone help? Regards, Paul
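
    For reference: strftime's %Y prints tm_year + 1900, so a tm whose tm_year field already holds the full year comes out 1900 too high (2010 + 1900 = 3910). If the struct is being filled by hand, the likely fix is to store years-since-1900 in it. A minimal sketch of the convention (the field values here are illustrative):

        #include <cstdio>
        #include <ctime>

        int main()
        {
            std::tm t = {};           // zero every field first
            t.tm_year = 2010 - 1900;  // tm_year counts years since 1900: store 110
            t.tm_mon  = 5 - 1;        // tm_mon is 0-based, so 4 means May
            t.tm_mday = 26;

            char buf[20];
            std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", &t);
            std::printf("%s\n", buf); // prints 2010-05-26 00:00:00
        }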

  • Fog with Blend in OpenGL

    - by MhdAljobory
    I want to add fog to my scene, which contains transparent textures drawn with blending. When I enable the fog, the transparent textures appear white from a distance, but when I disable it they look fine. What is the solution to this whiteness problem? Fog code:

        GLfloat fogColor[4] = {0.5f, 0.5f, 0.5f, 1.0f};
        glClearColor(0.5f, 0.5f, 0.5f, 1.0f);
        glFogi(GL_FOG_MODE, GL_LINEAR);
        glFogfv(GL_FOG_COLOR, fogColor);
        glFogf(GL_FOG_DENSITY, 0.35f);
        glHint(GL_FOG_HINT, GL_DONT_CARE);
        glFogf(GL_FOG_START, 1.0f);
        glFogf(GL_FOG_END, 1000.0f);
        glEnable(GL_FOG);
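
    Without the screenshot this is a guess, but a frequent culprit with fogged cut-out textures is the transparent parts of the blended quads: if those fragments still write depth or still get fogged, the quads wash toward the fog colour in the distance. One commonly suggested workaround, assuming cut-out style textures (foliage, fences and the like), is to discard the transparent texels with the fixed-function alpha test so they contribute neither colour nor depth (drawTransparentQuads is a hypothetical stand-in for your own draw call):

        // Draw opaque geometry first, then the cut-out textures.
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.5f);  // reject texels below 50% alpha, so they
                                        // are never fogged and never write depth
        drawTransparentQuads();         // hypothetical stand-in for your geometry
        glDisable(GL_ALPHA_TEST);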

  • Database Insider - July 2012 issue

    - by Javier Puerta
    The July issue of the Database Insider newsletter is now available. (Full newsletter here)

    INFORMATION INDEPTH NEWSLETTER - Database Insider Edition - July 2012

    Optimize Your Communications Applications on Oracle Exadata
    The communications industry is experiencing explosive growth, which means that high-performing, scalable, and cost-effective solutions are critical for success. Learn more by accessing white papers, success stories, product information, videos, podcasts, and more. Register now.

    Oracle Exadata Enables Neuralitic's SevenFlow Application Improvements
    Improves application performance 3x and compression rate/scalability 10x.

    ElectraCard Services Runs Oracle Exadata on Oracle Linux
    Achieves processing speeds greater than 12,000 TPS and increases performance 2x.

    Read full newsletter here

  • Are there studies about the disadvantages of using issue tracking systems? [closed]

    - by user1062120
    I don't like issue tracking systems because:

    - It takes too much time to describe issues in them, which discourages their use.
    - You create a place to keep your bugs. And if there is a place for them, people usually don't care too much about fixing a bug, because they can put it there so that someday someone can fix it (or not).
    - With time, the bug list gets so long that nobody can deal with it anymore, and it takes up a lot of our time.

    I prefer handling issues using post-its on a whiteboard, face-to-face conversations, and killing important bugs as soon as they appear. I don't care much about keeping track of bug history because I don't think it is worth the overhead. Am I alone here? Are there studies (book/article/whatever) about the disadvantages (or great advantages) of using issue tracking systems?

  • Ubuntu not booting after upgrade to 12.04 from 11.10

    - by ddd
    I upgraded to 12.04 from 11.10 today. After the upgrade I did the usual reboot, and Ubuntu now hangs on "Checking battery state... [OK]". I can log in to a terminal with Ctrl-Alt-F5, and I ran startx, but it hangs after displaying white images of the desktop icons; there is no menu or top bar at all. I guess this problem has something to do with the X server, but I cannot find xorg.conf on my system. How can I get around this problem? Also, is there a way I can back up my files through the command line, since the GUI is not showing up at all?

  • Testability &amp; Entity Framework 4.0

    This white paper describes and demonstrates how to write testable code with the ADO.NET Entity Framework 4.0 and Visual Studio 2010. The paper does not try to focus on a specific testing methodology, like test-driven design (TDD) or behavior-driven design (BDD). Instead, it focuses on how to write code that uses the ADO.NET Entity Framework yet remains easy to isolate and test in an automated fashion. We'll look at common design patterns that facilitate testing in data access scenarios...

  • Screen gets garbled on some web sites

    - by user10565
    I have a Gateway notebook with graphics card "01:05.0 VGA compatible controller: ATI Technologies Inc RS690M [Radeon X1200 Series]", using the open source driver on kernel 2.6.32-28-generic. There is no other operating system on the computer. When I am browsing the web with Firefox, everything normally works just fine, except that when I attempt to access some particular web pages the screen completely messes up, going mostly white with various streaks, although I can access other pages of the same site without problems. When I run the cursor over the garbled screen, bits of the image recompose themselves, at least partially, and I can still open the applications window, turn the computer off, open the terminal, take screen shots, and so on, although all menus are unreadable. Also, when I zoom in on Google Earth the screen completely messes up. At all other times there are no apparent problems. Any ideas?

  • How can I specify interleaved vertex attributes and vertex indices?

    - by freefallr
    I'm writing a generic ShaderProgram class that compiles a set of Shader objects, passes args to the shader (like vertex position, vertex normal, tex coords etc), then links the shader components into a shader program, for use with glDrawArrays. My vertex data already exists in a VertexBufferObject that uses the following data structure to create a vertex buffer:

        class CustomVertex
        {
        public:
            float m_Position[3];  // x, y, z      // offset 0, size = 3*sizeof(float)
            float m_TexCoords[2]; // u, v         // offset 3*sizeof(float), size = 2*sizeof(float)
            float m_Normal[3];    // nx, ny, nz
            float colour[4];      // r, g, b, a
            float padding[20];    // padded for performance
        };

    I've already written a working VertexBufferObject class that creates a vertex buffer object from an array of CustomVertex objects. This array is interleaved. It renders successfully with the following code:

        void VertexBufferObject::Draw()
        {
            if( ! m_bInitialized )
                return;

            glBindBuffer( GL_ARRAY_BUFFER, m_nVboId );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, m_nVboIdIndex );

            glEnableClientState( GL_VERTEX_ARRAY );
            glEnableClientState( GL_TEXTURE_COORD_ARRAY );
            glEnableClientState( GL_NORMAL_ARRAY );
            glEnableClientState( GL_COLOR_ARRAY );

            glVertexPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 0) );
            glTexCoordPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 12) );
            glNormalPointer( GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 20) );
            glColorPointer( 3, GL_FLOAT, sizeof(CustomVertex), ((char*)NULL + 32) );

            glDrawElements( GL_TRIANGLES, m_nNumIndices, GL_UNSIGNED_INT, ((char*)NULL + 0) );

            glDisableClientState( GL_VERTEX_ARRAY );
            glDisableClientState( GL_TEXTURE_COORD_ARRAY );
            glDisableClientState( GL_NORMAL_ARRAY );
            glDisableClientState( GL_COLOR_ARRAY );

            glBindBuffer( GL_ARRAY_BUFFER, 0 );
            glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, 0 );
        }

    Back to the Vertex Array Object though. My code for creating the Vertex Array Object is as follows. This is performed before the ShaderProgram runtime linking stage, and no glErrors are reported after its steps.

        // Specify the shader arg locations (e.g. their order in the shader code)
        for( int n = 0; n < vShaderArgs.size(); n ++ )
            glBindAttribLocation( m_nProgramId, n, vShaderArgs[n].sFieldName.c_str() );

        // Create and bind to a vertex array object, which stores the relationship
        // between the buffer and the input attributes
        glGenVertexArrays( 1, &m_nVaoHandle );
        glBindVertexArray( m_nVaoHandle );

        // Enable the vertex attribute array (we're using an interleaved array, since it's faster)
        glBindBuffer( GL_ARRAY_BUFFER, vShaderArgs[0].nVboId );
        glBindBuffer( GL_ELEMENT_ARRAY_BUFFER, vShaderArgs[0].nVboIndexId );

        // vertex data
        for( int n = 0; n < vShaderArgs.size(); n ++ )
        {
            glEnableVertexAttribArray( n );
            glVertexAttribPointer(
                n,
                vShaderArgs[n].nFieldSize,
                GL_FLOAT,
                GL_FALSE,
                vShaderArgs[n].nStride,
                (GLubyte *)NULL + vShaderArgs[n].nFieldOffset
            );
            AppLog::Ref().OutputGlErrors();
        }

    This doesn't render correctly at all. I get a pattern of white specks onscreen, in the shape of the terrain rectangle, but there are no regular lines etc. Here's the code I use for rendering:

        void ShaderProgram::Draw()
        {
            using namespace AntiMatter;
            if( ! m_nShaderProgramId || ! m_nVaoHandle )
            {
                AppLog::Ref().LogMsg("ShaderProgram::Draw() Couldn't draw object, as initialization of ShaderProgram is incomplete");
                return;
            }
            glUseProgram( m_nShaderProgramId );
            glBindVertexArray( m_nVaoHandle );
            glDrawArrays( GL_TRIANGLES, 0, m_nNumTris );
            glBindVertexArray( 0 );
            glUseProgram( 0 );
        }

    Can anyone see errors or omissions in either the VAO creation code or rendering code? Thanks!
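
    As an aside: in the fixed-function Draw() above, glTexCoordPointer is given 3 components for a two-float UV field, and a similar size/offset mismatch in the vShaderArgs table would silently skew every attribute that follows it. A hedged sketch of the same bookkeeping using offsetof, so each attribute's component count and byte offset come straight from CustomVertex (the Attrib table itself is illustrative, not from the question):

        #include <cstddef>  // offsetof

        struct Attrib { GLuint index; GLint size; size_t offset; };

        // One row per shader input: component count and byte offset in the struct.
        const Attrib attribs[] = {
            { 0, 3, offsetof(CustomVertex, m_Position)  },
            { 1, 2, offsetof(CustomVertex, m_TexCoords) },  // 2 floats, not 3
            { 2, 3, offsetof(CustomVertex, m_Normal)    },
            { 3, 4, offsetof(CustomVertex, colour)      },
        };

        for( const Attrib& a : attribs )
        {
            glEnableVertexAttribArray( a.index );
            glVertexAttribPointer( a.index, a.size, GL_FLOAT, GL_FALSE,
                                   sizeof(CustomVertex), (const GLvoid*)a.offset );
        }

    Deriving sizes and offsets from the struct keeps the VAO setup correct even if the padding or field order changes later.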

  • FBX Importer - Texture Name

    - by CmasterG
    I have a problem with the FBX SDK. I read in the data for the vertex positions and the uv coordinates. It works fine, but now I want to read, for each polygon, which texture it belongs to, so that I can have models with multiple textures. Can anyone tell me how I can get the texture name (file name) for my polygon? My code to read in vertex positions and uv coordinates is the following:

        int i, j, lPolygonCount = pMesh->GetPolygonCount();
        FbxVector4* lControlPoints = pMesh->GetControlPoints();
        int vertexId = 0;
        for (i = 0; i < lPolygonCount; i++)
        {
            int lPolygonSize = pMesh->GetPolygonSize(i);
            for (j = 0; j < lPolygonSize; j++)
            {
                int lControlPointIndex = pMesh->GetPolygonVertex(i, j);
                FbxVector4 pos = lControlPoints[lControlPointIndex];
                current_model[vertex_index].x = pos.mData[0] - pivot_offset[0];
                current_model[vertex_index].y = pos.mData[1] - pivot_offset[1];
                current_model[vertex_index].z = pos.mData[2] - pivot_offset[2];

                FbxVector4 vertex_normal;
                pMesh->GetPolygonVertexNormal(i, j, vertex_normal);
                current_model[vertex_index].nx = vertex_normal.mData[0];
                current_model[vertex_index].ny = vertex_normal.mData[1];
                current_model[vertex_index].nz = vertex_normal.mData[2];

                // read in UV data
                FbxStringList lUVSetNameList;
                pMesh->GetUVSetNames(lUVSetNameList);

                // get lUVSetIndex-th uv set
                const char* lUVSetName = lUVSetNameList.GetStringAt(0);
                const FbxGeometryElementUV* lUVElement = pMesh->GetElementUV(lUVSetName);
                if (!lUVElement)
                    continue;

                // only support mapping mode eByPolygonVertex and eByControlPoint
                if (lUVElement->GetMappingMode() != FbxGeometryElement::eByPolygonVertex &&
                    lUVElement->GetMappingMode() != FbxGeometryElement::eByControlPoint)
                    return;

                // index array, which holds the index referenced to the uv data
                const bool lUseIndex = lUVElement->GetReferenceMode() != FbxGeometryElement::eDirect;
                const int lIndexCount = (lUseIndex) ? lUVElement->GetIndexArray().GetCount() : 0;
                FbxVector2 lUVValue;

                // get the index of the current vertex in control points array
                int lPolyVertIndex = pMesh->GetPolygonVertex(i, j);

                // the UV index depends on the reference mode
                //int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyVertIndex) : lPolyVertIndex;
                int lUVIndex = pMesh->GetTextureUVIndex(i, j);
                lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex);
                current_model[vertex_index].tu = (float)lUVValue.mData[0];
                current_model[vertex_index].tv = (float)lUVValue.mData[1];

                vertex_index++;
            }
        }

        float v1[3], v2[3], v3[3];
        v1[0] = current_model[vertex_index - 3].x;
        v1[1] = current_model[vertex_index - 3].y;
        v1[2] = current_model[vertex_index - 3].z;
        v2[0] = current_model[vertex_index - 2].x;
        v2[1] = current_model[vertex_index - 2].y;
        v2[2] = current_model[vertex_index - 2].z;
        v3[0] = current_model[vertex_index - 1].x;
        v3[1] = current_model[vertex_index - 1].y;
        v3[2] = current_model[vertex_index - 1].z;
        collision_model->addTriangle(v1, v2, v3);
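
    Not an authoritative answer, but the usual route through the FBX SDK is: polygon, to material index (via the mesh's material layer element), to the FbxSurfaceMaterial on the node, to the FbxFileTexture connected to the material's diffuse property. A hedged sketch along those lines, assuming a recent FBX SDK with the templated GetSrcObject and a single material element; error handling kept minimal:

        // Hedged sketch: map polygon polyIndex to a diffuse texture file name.
        const char* GetPolygonTextureName(FbxMesh* pMesh, int polyIndex)
        {
            FbxGeometryElementMaterial* lMatElement = pMesh->GetElementMaterial();
            if (!lMatElement) return nullptr;

            // eAllSame: one material for the whole mesh; otherwise look up per polygon.
            int lMatIndex = (lMatElement->GetMappingMode() == FbxGeometryElement::eAllSame)
                              ? lMatElement->GetIndexArray().GetAt(0)
                              : lMatElement->GetIndexArray().GetAt(polyIndex);

            FbxSurfaceMaterial* lMaterial = pMesh->GetNode()->GetMaterial(lMatIndex);
            if (!lMaterial) return nullptr;

            // Follow the texture connected to the diffuse channel.
            FbxProperty lProp = lMaterial->FindProperty(FbxSurfaceMaterial::sDiffuse);
            FbxFileTexture* lTexture = lProp.GetSrcObject<FbxFileTexture>(0);
            return lTexture ? lTexture->GetFileName() : nullptr;
        }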

  • Implicit theme error: The property 'Content' was not found in type 'System.Windows.Controls.Control'.

    - by Mark
    I have got an error while trying to upgrade our large project to SL4. I didn't write the original theme and my theme knowledge isn't great. In my demo app I have a Label and a LabelHeader (which I created; it is just a class derived from Label with DefaultStyleKey = typeof(LabelHeader)). I am styling them like this:

        <Style TargetType="themeControls:LabelHeader">
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate>
                        <DataInput:Label FontSize="{TemplateBinding FontSize}"
                                         FontFamily="{TemplateBinding FontFamily}"
                                         Foreground="{TemplateBinding Foreground}"
                                         Content="{TemplateBinding Content}"/>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
            <Setter Property="FontFamily" Value="Tahoma"/>
            <Setter Property="FontSize" Value="20"/>
            <Setter Property="Foreground" Value="Red"/>
        </Style>

    This works in SL3, but in SL4 I get:

        Error: Unhandled Error in Silverlight Application
        Code: 2500
        Category: ParserError
        Message: The property 'Content' was not found in type 'System.Windows.Controls.Control'.
        File:
        Line: 9
        Position: 168

    If I change Content="{TemplateBinding Content}" to Content="XXX" then there is no error, but of course I get XXX in my label rather than the content I set in XAML on the page. Any ideas how I can get this working? Demo project here: http://walkersretreat.co.nz/files/ThemeIssue.zip (Apologies for reposting; I have so far got no answers over here: http://forums.silverlight.net/forums/p/183380/415930.aspx#415930)

  • Custom HTML helper is not working in ASP.NET MVC 2.0

    - by chobo2
    Hey, I was using this custom HTML helper in ASP.NET MVC 1.0, but now I am trying to use it in a 2.0 project and it crashes: http://blog.pagedesigners.co.nz/archive/2009/07/15/asp.net-mvc-ndash-validation-summary-with-2-forms-amp-1.aspx

    This is the error I get:

        System.MissingMethodException was unhandled by user code
        Message=Method not found: 'System.String System.Web.Mvc.Html.ValidationExtensions.ValidationSummary(System.Web.Mvc.HtmlHelper)'.
        Source=CustomHtmlHelpers
        StackTrace:
            at CustomHtmlHelpers.ActionValidationSummaryHelper.ActionValidationSummary(HtmlHelper html, String action)
            at ASP.views_signin_signin_aspx.__RenderContent2(HtmlTextWriter __w, Control parameterContainer) in SignIn.aspx:line 23
            at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
            at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer)
            at System.Web.UI.Control.Render(HtmlTextWriter writer)
            at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
            at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter)
            at System.Web.UI.Control.RenderControl(HtmlTextWriter writer)
            at ASP.views_shared_site_master.__Render__control1(HtmlTextWriter __w, Control parameterContainer) in Site.Master:line 64
            at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
            at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer)
            at System.Web.UI.Control.Render(HtmlTextWriter writer)
            at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
            at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter)
            at System.Web.UI.Control.RenderControl(HtmlTextWriter writer)
            at System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children)
            at System.Web.UI.Control.RenderChildren(HtmlTextWriter writer)
            at System.Web.UI.Page.Render(HtmlTextWriter writer)
            at System.Web.Mvc.ViewPage.Render(HtmlTextWriter writer)
            at System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter)
            at System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter)
            at System.Web.UI.Control.RenderControl(HtmlTextWriter writer)
            at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        InnerException:

    My other HTML helpers in the same library do work. I added the namespace into the web.config.

  • How do you dynamically allocate a contiguous 3D array in C?

    - by Derek
    In C, I want to loop through an array in this order:

        for (int z = 0; z < NZ; z++)
            for (int x = 0; x < NX; x++)
                for (int y = 0; y < NY; y++)
                    3Darray[x][y][z] = 100;

    How do I create this array in such a way that 3Darray[0][1][0] comes right before 3Darray[0][2][0] in memory? I can get an initialization to work that gives me "z-major" ordering, but I really want y-major ordering for this 3D array. This is the code I have been trying to use:

        char *space;
        char ***Arr3D;
        int y, z;
        ptrdiff_t diff;

        space = malloc(X_DIM * Y_DIM * Z_DIM * sizeof(char));
        Arr3D = malloc(Z_DIM * sizeof(char **));
        for (z = 0; z < Z_DIM; z++) {
            Arr3D[z] = malloc(Y_DIM * sizeof(char *));
            for (y = 0; y < Y_DIM; y++) {
                Arr3D[z][y] = space + (z * (X_DIM * Y_DIM) + y * X_DIM);
            }
        }
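
    For what it's worth: a char*** pointer tree cannot give the last subscript a non-unit stride, so y-fastest ordering with trailing-z indexing is out of reach that way. One contiguous-block approach is to drop the triple pointer and do the index arithmetic explicitly. A sketch in C-style C++, with illustrative dimensions (NX, NY, NZ are stand-ins for the question's constants):

        #include <cstdlib>

        enum { NX = 4, NY = 5, NZ = 3 }; // illustrative sizes, not from the question

        // Element (x, y, z) lives at ((z*NX + x)*NY + y), so y is the fastest-
        // varying index: (0,1,0) sits immediately before (0,2,0) in memory.
        inline char& at3d(char* base, int x, int y, int z)
        {
            return base[(((size_t)z * NX + x) * NY) + y];
        }

        int main()
        {
            // One contiguous block instead of a pointer tree.
            char* space = (char*)std::malloc((size_t)NX * NY * NZ);

            // Same z/x/y loop nest as the question; it now walks memory sequentially.
            for (int z = 0; z < NZ; z++)
                for (int x = 0; x < NX; x++)
                    for (int y = 0; y < NY; y++)
                        at3d(space, x, y, z) = 100;

            std::free(space);
        }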

  • .NET CF -- Set Form height based on InputPanel state

    - by user354941
    Hi. So, I've got a C# project for Windows Mobile phones and I'm trying to work with the InputPanel. Specifically, I've got one form with a stack of Labels and TextBoxes that collect user input, and an InputPanel that alerts me when the user opens the SIP. Everything works fine so far. When I get a message that the SIP status has changed, I want to change the height of the Form, which doesn't seem possible. Here's my event handler for my InputPanel:

        void m_InputPanel_EnabledChanged(object sender, EventArgs e)
        {
            // :( this assignment operation doesn't work and it doesn't
            this.ClientSize = inputPanel1.VisibleDesktop.Size;

            // doesn't work
            this.Size = inputPanel1.VisibleDesktop.Size;

            // assignment operation works, but isn't very useful
            this.visibleHeight = inputPanel1.VisibleDesktop.Height;

            this.InitializeUI();
        }

    When I say that the assignment operation doesn't work, I mean that the values don't change in the debugger. I can understand that maybe I can't change the size of a Form, but I can't understand why trying to change it wouldn't throw an exception or give a compiler error. I have my Form's WindowState set to Normal instead of Maximized, but it doesn't make a difference. Also, I have read this page that tells me how I'm supposed to do this (http://www.christec.co.nz/blog/archives/42), but I can't easily put all of my controls in a Panel because I'm using a bunch of custom stuff to do alpha background controls.

  • jQuery: How can I animate to a taller height with the height being added to the top of the element?

    - by Jannis
    Hi, I have a simple problem but I am not sure how to solve it. Basically I have some hidden content that, once expanded, requires a height of 99px. While collapsed, the element holding it (section#newsletter) is set to be 65px in height.

    LIVE EXAMPLE: http://dev.supply.net.nz/asap-finance-static/

    On the click of a#signup_form the section#newsletter is expanded to 99px using the following jQuery:

        $('#newsletter_form').hide(); // hide the initial form fields
        $('#form_signup').click(function(event) {
            event.preventDefault(); // stop click
            // animate
            $('section#newsletter').animate({height: 99}, 400, function() {
                $('#newsletter_form').show(300);
            })
        });

    All this works great except for the fact that this element sits in a sticky footer, so its initial height is used to position it. Animating the height of this element causes scrollbars on the browser, because the 34px added are added to the bottom of the element. So my question: how can I add these 34px to the top of the element, so the height expands upwards into the body, not downwards? Thanks for reading, I look forward to your help and suggestions. Jannis

  • Time delay an external RSS feed

    - by x3ja
    I subscribe to a number of RSS feeds, mostly from within my own timezone (UK: currently GMT+1, a.k.a. BST). However, I'm also interested in news from New Zealand (currently GMT+12). My problem is caused by my addiction to keeping my unread count at, or near, zero. When I load up my RSS reader in the morning it has gathered all the NZ news at once (normally around 100 items), and I feel compelled either to read them all or to mark them all as read to feed my need for a zero unread count. I figured a good solution would be to time-delay the RSS feed somehow, so I would be drip-fed the stories at their time +12 hours and could read them through the day as they come in. So my question (or, rather, questions):

    1. Does such a thing exist currently, and what is it? (No point reinventing the wheel.)
    2. If not, what would be the best way to approach doing this myself? I have access to a Linux web server on which I can run scripts, create databases, store files etc., so there should be a way. I'm most conversant in Perl and have done a little fiddling with XML within that, so I would naturally process ... or is there some simpler way to do it that I'm missing?

  • PHP SDK for Facebook: Uploading an Image for an Event created using the Graph API

    - by wenbert
    Can anyone shed some light on my problem?

        <?php
        $config = array();
        $config['appId'] = "foo";
        $config['secret'] = "bar";
        $config['cookie'] = true;
        $config['fileUpload'] = true;
        $facebook = new Facebook($config);

        $eventParams = array(
            "privacy_type" => $this->request->data['Event']['privacy'],
            "name"         => $this->request->data['Event']['event'],
            "description"  => $this->request->data['Event']['details'],
            "start_time"   => $this->request->data['Event']['when'],
            "country"      => "NZ"
        );

        // around 300x300 pixels
        // I have set the permissions to Everyone
        $imgpath = "C:\\Yes\\Windows\\Path\\Photo_for_the_event_app.jpg";
        $eventParams["@file.jpg"] = "@" . $imgpath;

        $fbEvent = $facebook->api("me/events", "POST", $eventParams);
        var_dump($fbEvent); // I get the event id

    I also have this in my "scope" when the user is asked to allow the app to post on his behalf: user_about_me,email,publish_stream,create_event,photo_upload

    This works. It creates the event with all the details I have specified, EXCEPT for the event image. I have been through most of the Stackoverflow posts related to my problem, but none of them work for me (e.g. http://stackoverflow.com/a/4245260/66767). I also do not get any error. Any ideas? Thanks!

  • Disable specific dates in jQuery

    - by rvdb86
    I am using the jQuery date picker found here, which allows the user to pick a start and end date: http://www.kelvinluck.com/assets/jquery/datePicker/v2/demo/datePickerStartEnd.html

    However, I want to be able to disable specific dates. I tried to implement the code found here, which discusses disabling national holidays: stackoverflow.com/questions/501943/can-the-jquery-ui-datepicker-be-made-to-disable-saturdays-and-sundays-and-holida

    Here is my complete code:

        $(function() {
            $('.date-pick').datePicker({ beforeShowDay: nationalDays })
            // $(".selector").datepicker({ beforeShowDay: nationalDays })

            var natDays = [
                [1, 26, 'au'], [2, 6, 'nz'],  [3, 17, 'ie'],
                [4, 27, 'za'], [5, 25, 'ar'], [6, 6, 'se'],
                [7, 4, 'us'],  [8, 17, 'id'], [9, 7, 'br'],
                [10, 1, 'cn'], [11, 22, 'lb'], [12, 12, 'ke']
            ];

            function nationalDays(date) {
                for (i = 0; i < natDays.length; i++) {
                    if (date.getMonth() == natDays[i][0] - 1 && date.getDate() == natDays[i][1]) {
                        return [false, natDays[i][2] + '_day'];
                    }
                }
                return [true, ''];
            }

            $('#start-date').bind('dpClosed', function(e, selectedDates) {
                var d = selectedDates[0];
                if (d) {
                    d = new Date(d);
                    $('#end-date').dpSetStartDate(d.addDays(1).asString());
                }
            });

            $('#end-date').bind('dpClosed', function(e, selectedDates) {
                var d = selectedDates[0];
                if (d) {
                    d = new Date(d);
                    $('#start-date').dpSetEndDate(d.addDays(-1).asString());
                }
            });
        });

    But this does not seem to work. I would really appreciate any assistance with solving this!

  • Python: need to get energies of charge pairs.

    - by Two786
    I am new to Python. I have to make a program for a project that takes a PDB format file as input and returns a list of all the intra-chain and inter-chain charge pairs and their energies (using Coulomb's law, assuming a dielectric constant (ε) of 40.0). For simplicity, the charged residues for this program are just Arg (CZ), Lys (NZ), Asp (CG) and Glu (CD), with the charge-bearing atoms for each indicated in parentheses. The program should report any attractive or repulsive interactions within 8.0 Å. Here is some additional information needed for the program:

        Eij = energy of interaction between atoms i and j in kilocalories/mole (kcal/mol)
        qi  = charge for atom i (+1 for Lys or Arg, -1 for Glu or Asp)
        rij = distance between atoms i and j in angstroms, using the distance formula

    The output should adhere to the following format, with the first row being the labels and the corresponding values below it:

        First residue      : Second residue     Distance      Energy
        Lys 10 Chain A     : ASP 46 Chain A     D= 4.76 ang   E= -2.32 kcals/mol

    I really have no idea how to tackle this problem; any and all help is greatly appreciated. I hope this is the right place to ask. Thank you in advance. Using Python 2.5.
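
    For reference, the screened Coulomb energy described above is usually written as:

        E_ij = 332 * (q_i * q_j) / (ε * r_ij)   (kcal/mol, with r_ij in Å and ε = 40.0)

    where 332 is the conventional kcal·Å/mol electrostatic conversion factor for charges in electron units; an attractive (+1, -1) pair gives a negative E_ij and a repulsive pair a positive one. Note this is the common textbook form, and the course materials may prescribe a slightly different constant.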

  • jQuery: select the parent from inside an image

    - by michael
    I have the following code. I'm trying to make a loader appear, then fade the image in once it is loaded. I want to make the function reusable, so I can just add an image to a div with the loader id and it will work. I can't figure out how to select the parent loader div from inside the image. The commented line works fine, but I think that will select all such divs; I just want to select the parent loader div. Can anyone help? Thanks.

        <div id="loader" class="loading">
            <img src="http://www.inhousedesign.co.nz/images10/caravan_01.jpg" style="display: none;"/>
        </div>

        $("#loader").each(function(){
            var source = $(this).find("img").attr('src');
            var img = new Image();
            $(img).load(function(){
                $(this).hide();
                //$("#loader").removeClass('loading').append(this);
                $(this).parents("div:first").removeClass('loading').append(this);
                $(this).fadeIn(800);
            }).attr('src', source);
        });

  • Cloning ID3DXMesh with a declaration that has 12 floats breaks?

    - by meds
    I have the following vertex declaration:

        struct MESHVERTInstanced
        {
            float x, y, z;          // Position
            float nx, ny, nz;       // Normal
            float tu, tv;           // Texcoord
            float idx;              // index of the vertex!
            float tanx, tany, tanz; // The tangent

            const static D3DVERTEXELEMENT9 Decl[6];
            static IDirect3DVertexDeclaration9* meshvertinstdecl;
        };

    And I declare it as such:

        const D3DVERTEXELEMENT9 MESHVERTInstanced::Decl[] =
        {
            { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
            { 0, 12, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
            { 0, 24, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
            { 0, 32, D3DDECLTYPE_FLOAT1, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1 },
            { 0, 36, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TANGENT,  0 },
            D3DDECL_END()
        };

    What I try to do next is copy an ID3DXMesh into another one with the new vertex declaration, as such:

        model->CloneMesh( model->GetOptions(), MESHVERTInstanced::Decl, gd3dDevice, &pTempMesh );

    When I try to get the FVF size of pTempMesh (D3DXGetFVFVertexSize(pTempMesh->GetFVF())) I get 0, though the size should be 48. The whole thing is fine if I don't have the last declaration element, { 0, 36, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TANGENT, 0 }, in it, and the CloneMesh function does not return a FAIL. I've also tried using different declarations, such as D3DDECLUSAGE_TEXCOORD, and that has worked fine, returning a size of 48. Is there something specific about D3DDECLUSAGE_TANGENT I don't know? I'm at a complete loss as to why this isn't working...
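
    One thing worth checking, offered as a guess rather than a definitive answer: the legacy FVF encoding has no slot for a TANGENT usage, so a declaration containing one cannot be expressed as an FVF at all; GetFVF() can then legitimately return 0, and D3DXGetFVFVertexSize(0) is 0 as well. A hedged sketch that sizes the vertex from the declaration instead of the FVF:

        // Ask the cloned mesh for its declaration and size stream 0 from that,
        // which works for layouts (e.g. with tangents) that have no FVF form.
        D3DVERTEXELEMENT9 decl[MAX_FVF_DECL_SIZE];
        if( SUCCEEDED( pTempMesh->GetDeclaration( decl ) ) )
        {
            UINT vertexSize = D3DXGetDeclVertexSize( decl, 0 ); // expect 48 here
        }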

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth with Scrum here, but you can find out more by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW's Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

    Acknowledgements:

    - Bill Heys: Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010.
    - Willy-Peter Schaub: Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010.
    - Chris Birmele: Chris wrote some of the early TFS Branching and Merging Guidance.
    - Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect: Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

    Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will double in price to recover costs. Work is approved by the guys with budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy. There are lots of clients on RTM 1.0 calling out for patches.

    As I keep getting Reverse Integration and Forward Integration mixed up, and Bill keeps slapping my wrists, I thought I should have a reminder:

    "You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch)." - one of my many slaps on the wrist from Bill Heys.

    As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the "Main" or "Trunk" line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don't you? Sadly, this is a very common scenario, and I have had people argue that branching merely adds complexity.

    "Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    "A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch, by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    The second biggest mistake developers make is branching anything other than the WHOLE "Main" line. If you branch parts of your code and not others, they get out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the "Main" folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well, although I would not recommend that unless you have a massive SQL cluster to house your source code.

    "We tried the 'add environment' back in South Africa and while it was 'phenomenal', especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    "I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    You can read SSW's Rules to Better Source Control for some additional information on what source control to use and how to use it. There are also a number of branching anti-patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment:

    - Merge Paranoia: avoiding merging at all cost, usually because of a fear of the consequences.
    - Merge Mania: spending too much time merging software assets instead of developing them.
    - Big Bang Merge: deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously.
    - Never-Ending Merge: continuous merging activity because there is always more to merge.
    - Wrong-Way Merge: merging a software asset version with an earlier version.
    - Branch Mania: creating many branches for no apparent reason.
    - Cascading Branches: branching but never merging back to the main line.
    - Mysterious Branches: branching for no apparent reason.
    - Temporary Branches: branching for changing reasons, so the branch becomes a permanent temporary workspace.
    - Volatile Branches: branching with unstable software assets shared by other branches or merged into another branch. Note: branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state.
    - Development Freeze: stopping all development activities while branching, merging, and building new baselines.
    - Berlin Wall: using branches to divide the development team members, instead of dividing the work they are performing.

    (From Branching and Merging Primer by Chris Birmele, Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia.)

    "In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium." - Willy-Peter Schaub on Merge Paranoia

    "Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Main line. This is where your stable code lives, where any build has known entities, always passes, and has a happy test that passes as well.

    Many development projects consist of a single "Main" line of source and artifacts. This is good; at least there is source control. There are, however, a couple of issues that need to be considered. What happens if:

    - you and your team are working on a new set of features and the customer wants a change to his current version?
    - you are working on two features and the customer decides to abandon one of them?
    - you have two teams working on different feature sets and their changes start interfering with each other?
    - I just use labels instead of branches?

    That's a lot of "what ifs", but there is a simple way of preventing this: branching.

    "In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don't see how labels work here." - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Creating a single feature branch means you can isolate the development work on that branch.

    It is standard practice for large projects with lots of developers to use feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers on other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch that contains all the changes for the current sprint. The Main branch has its permissions changed so contributors to the project can only Branch and Merge with "Main". This will prevent accidental check-ins or check-outs of the "Main" line that would contaminate the code. The developers continue to develop on Sprint 1 until the completion of the sprint.

    Note: In the real world, starting a new greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.

    Figure: Once the sprint is complete, the Sprint 1 code can then be merged back into the Main line.

    There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but it builds good muscle memory into your developers' work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product.

    The process of completing your sprint development:

    1. The Team completes their work according to their definition of done.
    2. Merge from "Main" into "Sprint1" (Forward Integration). Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches (we will talk about Release branches later).
    3. Merge from "Sprint1" into "Main" to commit your changes (Reverse Integration).
    4. Check in.
    5. Delete the Sprint1 branch. Note: the Sprint1 branch is no longer required, as its useful life has been concluded.
    6. Check in.
    7. Done.

    But you are not yet done with the sprint. The goal in Scrum is to have a "potentially shippable product" at the end of every sprint, and we do not have that yet; we only have finished code.

    Figure: With Sprint 1 merged, you can create a Release branch and run your final packaging and testing.

    "In 99% of all projects I have been involved in or watched, a 'shippable product' only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80's brain washing that we only ship when we reach the agreed quality and business feature bar." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what we at SSW call a "Test Please". This is first an internal test of the product to make sure it meets the needs of the customer, generally using a resource external to your Team. Then a "Test Please" is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please in our Rules to Successful Projects: "Do you conduct an internal 'test please' prior to releasing a version to a client?"

    Figure: If you find a deviation from the expected result, you fix it on the Release branch.

    If during your final testing or your "Test Please" you find there are issues or bugs, then you should fix them on the Release branch. If you can't fix them within the time box of your sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the Release branch as you would the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it.

    Note: Don't get forced by the business into adding features to a Release branch; that indicates the unspoken requirement is a product spin-off. In that case you can create a new Team Project, branch from the required Release branch to create a new Main branch for that product, and create a whole new backlog to work from.

    Figure: When the Team decides it is happy with the product, you can create an RTM branch.

    Once you have fixed all the bugs you can, added any you can't to the Product Backlog, and your Team is happy with the result, you can create a release. This consists of doing the final build and packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you "shipped". This is really an audit-trail branch and is optional, but it is good practice. You could use a label, but labels are not auditable, and if a dispute is raised by the customer you can produce a verifiable version of the source code for an independent party to check. Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.

    Figure: If you have made any changes in the Release, you will need to merge back up to Main in order to finalize the changes.

    Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main: you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer's investment.

    Note: In order to really achieve protection for both you and your client, you would add automated builds, automated tests, automated acceptance tests, acceptance test tracking, unit tests, load tests, web tests and all the other good engineering practices that help produce reliable software.

    Figure: After the Sprint Planning meeting the process begins again.

    Where the Sprint Review and Retrospective meetings mark the end of the sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new branch to develop in.

    How do we handle a bug (or bugs) in production that can't wait? Although in Scrum the only work done should be on the backlog, a little buffer should be added during Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can't wait for the sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question about the release activities:

    "In reality Sprint 2 starts when Sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of Sprint 1? It would introduce FI's from Main to Sprint 2, I guess. Your 'Figure: Merging Sprint 2 back into Main.' covers what I tend to believe to be reality in most cases." - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    I agree, and if you have a single Scrum Team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, packaging and release should be included in the sprint time box. If more are needed on the current production release during the Sprint 2 time box, then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):

    1. Backlog: add the bug to the backlog and fix it in the next sprint.
    2. Buffer time: use any buffer time included in the current sprint to fix the bug quickly.
    3. Make time: remove a Story from the current sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: the Team must agree that it can still meet the Sprint Goal.
    4. Cancel sprint: cancel the sprint and concentrate all resource on fixing the bug(s). Note: this can be very costly if the current sprint has already had a lot of work completed, as it will be lost.

    The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option 2 or 3, as they are uncomplicated but severe bugs.

    Figure: Real-world issue where a bug needs to be fixed in the current release.

    If the bug(s) is urgent enough, then your only option is to fix it in place. You can edit the Release branch to find and fix the bug, hopefully creating a test so it can't happen again. Follow the prior process and conduct an internal and customer "Test Please" before releasing. You can read about how to conduct a Test Please in our Rules to Successful Projects: "Do you conduct an internal 'test please' prior to releasing a version to a client?"

    Figure: After you have fixed the bug, you need to ship again.

    You then need to again create an RTM branch to hold, in escrow, the version of the code you released.

    Figure: Main is now out of sync with your Release.

    We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch? Are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes... and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?

    Figure: Getting the changes in Main into Sprint 2 is very important.

    The Team now needs to do a Forward Integration merge into their sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue, so they do not get further out of sync with Main. Note: avoid the "Big Bang Merge" at all costs.

    Figure: Merging Sprint 2 back into Main (the Reverse Integration), and R0 terminates.

    Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.

    Figure: The logical conclusion.

    This then allows the creation of the next release. By now you should be getting the big picture, and hopefully you have learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts, coupled with real-world experience, really help harden my understanding. Branching is a tool; it is not a silver bullet. Don't overuse it, and avoid anti-patterns where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.

    Technorati Tags: Branching, Scrum, VS ALM, TFS 2010, VS2010

  • SQL SERVER – WRITELOG – Wait Type – Day 17 of 28

    - by pinaldave
    WRITELOG is one of the most interesting wait types. So far we have seen a lot of different wait types, but this one is associated with the log file, which makes it interesting to deal with.

    From Book On-Line: WRITELOG occurs while waiting for a log flush to complete. Common operations that cause log flushes are checkpoints and transaction commits.

    WRITELOG explanation: This wait type is usually seen in heavily transactional databases. When data is modified, it is written both to the log cache and the buffer cache. This wait type occurs while data in the log cache is flushing to disk; during this time, the session has to wait due to WRITELOG. I have recently seen this wait type's persistence at a client's site, where one of the long-running transactions was stopped by the user, causing it to roll back. In the future, I will see if I can re-create this situation on my machine to validate the relation.

    Reducing WRITELOG waits: There are several suggestions for reducing this wait stat:

    - Move the transaction log to a disk separate from the mdf and other files.
    - Avoid cursor-like coding methodology and frequent committing of statements.
    - Find the most active file based on IO stall time, using the script written over here. You can also use fn_virtualfilestats to find IO-related issues, using the script mentioned over here.
    - Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details. Read about them over here.

    There are two excellent resources by Paul Randal; I suggest you learn the subject from those videos. The links to the videos are here and here.

    Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

  • Ghost Records, Backups, and Database Compression…With a Pinch of Security Considerations

    - by Argenis
    Today Jeffrey Langdon (@jlangdon) posed the following questions on #SQLHelp. So I set out to answer them, and I said to myself: "Hey, I haven't blogged in a while, how about I blog about this particular topic?" Thus, this post was born. (If you have never heard of Ghost Records and/or the Ghost Cleanup Task, go see this blog post by Paul Randal.)

    1) Do ghost records get copied over in a backup?

    If you guessed yes, you guessed right. The backup process in SQL Server takes all data as it is on disk; it doesn't crack the pages open to selectively pick which slots have actual data and which ones do not. The whole page is backed up, regardless of its contents. Even if ghost cleanup has run and processed the ghost records, the slots are not overwritten immediately, but rather when another DML operation comes along and uses them. As a matter of fact, all of the allocated space for a database will be included in a full backup.

    So, this poses a bit of a security/compliance problem for some of you DBA folk: if you want to take a full backup of a database after you've purged sensitive data, you should rebuild all of your indexes (with FILLFACTOR set to 100%). But the empty space on your data file(s) might still contain sensitive data! A SHRINKFILE might help get rid of that (not so) empty space, but that might not be the end of your troubles. You might _STILL_ have (not so) empty space on your files! One approach that you can follow is to export all of the data in your database to another SQL Server instance that does NOT have Instant File Initialization enabled. This can be a tedious and time-consuming process, though, so you have to weigh your options and see what makes sense for you. Snapshot Replication is another idea that comes to mind.

    2) Does compression get rid of ghost records (2008)?

    The answer to this is no. The Ghost Records/Ghost Cleanup Task mechanism is alive and well on compressed tables and indexes. You can prove this by running a simple script:

        CREATE DATABASE GhostRecordsTest
        GO
        USE GhostRecordsTest
        GO
        CREATE TABLE myTable (myPrimaryKey int IDENTITY(1,1) PRIMARY KEY CLUSTERED,
                              myWideColumn varchar(1000) NOT NULL DEFAULT 'Default string value')
        ALTER TABLE myTable REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)
        GO
        INSERT INTO myTable DEFAULT VALUES
        GO 10
        DELETE myTable WHERE myPrimaryKey % 2 = 0
        DBCC TRACEON(2514)
        DBCC CHECKTABLE(myTable)

    Trace flag 2514 will make DBCC CHECKTABLE give you an extra tidbit of information in its output. For the above script: "Ghost Record count = 5"

    Until next time,

    -Argenis

  • SQL Server Scripts I Use

    - by Bill Graziano
    When I get to a new client I usually find myself using the same set of scripts for maintenance and troubleshooting. These are all drop-in solutions for various maintenance issues.

    - Reindexing. I use Michelle Ufford's (SQLFool) re-indexing script. I like that it has a throttle and only re-indexes when needed. She also has a variety of other interesting scripts on her blog too.
    - Server Activity. Adam Machanic is up to version 10 of sp_WhoIsActive. It's a great replacement for the sp_who* stored procedures and does so much more. If a server is acting funny, this is one of the first tools I use.
    - Backups. Tara Kizer has a great little T-SQL script for SQL Server backups.
    - Wait Stats. Paul Randal has a great script to display wait stats. The biggest benefit for me is that his script filters out at least three dozen wait stats that I just don't care about (for example, LAZYWRITER_SLEEP).
    - Update Statistics. I didn't find anything I liked, so I wrote a simple script to update stats myself. The big need for me was that it had to run inside a time window and update the oldest statistics first. Is there a better one?
    - Diagnostic Queries. Glenn Berry has a huge collection of DMV queries available. He also just highlighted five of them, including two I really like dealing with unused indexes and suggested indexes.
    - Single Use Query Plans. Kim Tripp has a script that counts the number of single-use query plans. This should guide you in whether to enable the Optimize for Adhoc Workloads option in SQL Server 2008.
    - Granting Permissions to Developers. This is one of those scripts I didn't even know I needed until I needed it. Kendra Little wrote it to grant a login read-only permission to all the databases. It also grants view server state and a few other handy permissions.

    What else do you use? What should I add to my list?
