Search Results

Search found 14924 results on 597 pages for 'selector performance'.

Page 459/597 | < Previous Page | 455 456 457 458 459 460 461 462 463 464 465 466  | Next Page >

  • SQL Server, temporary tables with truncate vs table variable with delete

    - by Richard
    I have a stored procedure in which I create a temporary table that typically contains between 1 and 10 rows. This table is truncated and refilled many times during the procedure; it is truncated because that is faster than a delete. Would I get any performance increase by replacing this temporary table with a table variable, given that I would then pay the penalty of using DELETE (TRUNCATE does not work on table variables)? While table variables live mainly in memory and are generally faster than temp tables, do I lose that benefit by having to delete rather than truncate?

    Read the article

  • Nice name for `decorator' class?

    - by Lajos Nagy
    I would like to separate the API I'm working on into two sections: 'bare-bones' and 'cushy'. The idea is that all method calls in the 'cushy' section could be expressed in terms of the ones in the 'bare-bones' section; that is, they would serve only as convenience methods for quick-and-dirty use. The reason I would like to do this is that very often, when people begin to use an API for the first time, they are not interested in details and performance: they just want to get it working. Has anybody tried anything similar before? I'm particularly interested in naming conventions and ways of organizing the code.

    Read the article

  • Can I do video communication with silverlight 4.0?

    - by tom greene
    With Silverlight 4.0 it is possible to show live video of the user on the screen. Here is the code:

        VideoBrush videoBrush = new VideoBrush();
        CaptureSource captureSource = new CaptureSource
        {
            VideoCaptureDevice = CaptureDeviceConfiguration.GetAvailableVideoCaptureDevices().First()
        };
        bool b = CaptureDeviceConfiguration.RequestDeviceAccess();
        videoBrush.SetSource(captureSource);
        captureSource.Start();
        myrect.Fill = videoBrush;

    However, I am looking for a way to show the video to someone else - seeing oneself on screen is not that interesting. Is it possible? Do I need my own server? Can I use cloud services to do the communication? Are there performance issues?

    Read the article

  • Handler invocation speed: Objective-C vs virtual functions

    - by Kerido
    I heard that calling a handler (delegate, etc.) in Objective-C can be even faster than calling a virtual function in C++. Is that really correct? If so, how can that be? AFAIK, virtual functions are not that slow to call. At least, this is my understanding of what happens when a virtual function is called:

      1. Compute the index of the function pointer location in the vtbl.
      2. Obtain the pointer to the vtbl.
      3. Dereference the pointer and obtain the beginning of the array of function pointers.
      4. Offset (in pointer scale) the beginning of the array by the index value obtained in step 1.
      5. Issue a call instruction.

    Unfortunately, I don't know Objective-C, so it's hard for me to compare performance. But at least the mechanism of a virtual function call doesn't look that slow, right? How can anything other than a static function call be faster?

    Read the article

  • Is there any killer application for Ontology/semantics/OWL/RDF yet?

    - by narnirajesh
    Hi guys, I got interested in semantic technologies after reading a lot of books, blogs and articles on the net saying that they would make data machine-understandable, let intelligent agents do powerful reasoning, enable automated and dynamic service composition, and so on. I have been reading the same claims for two years now. The number of articles, blogs and semantic conferences has increased considerably, but I am still unable to see any killer application. Why is that? Or is there already some application or product (commercial or open source) that actually does what is being boasted of? To put it more precisely: is there any product that leverages semantic technologies (especially RDF/OWL/SPARQL) and delivers functionality, performance or maintainability that would not have been possible with existing (non-semantic) technologies? Some product that depends entirely on semantic technologies, really adds value for customers and generates revenue?

    Read the article

  • PyPy -- How can it possibly beat CPython?

    - by Vulcan Eager
    From the Google Open Source Blog:

        PyPy is a reimplementation of Python in Python, using advanced techniques to try to attain better performance than CPython. Many years of hard work have finally paid off. Our speed results often beat CPython, ranging from being slightly slower, to speedups of up to 2x on real application code, to speedups of up to 10x on small benchmarks.

    How is this possible? Which Python implementation was used to implement PyPy? CPython? And what are the chances of a PyPyPy or PyPyPyPy beating their score? (On a related note... why would anyone try something like this?)

    Read the article

  • OpenGL basics: calling glDrawElements once per object

    - by Bethor
    Hi all, continuing on from my explorations of the basics of OpenGL (see this question), I'm trying to figure out the basic principles of drawing a scene with OpenGL. I am trying to render a simple cube repeated n times in every direction. My method appears to yield terrible performance: 1000 cubes brings performance below 50 fps (on a QuadroFX 1800, roughly a GeForce 9600GT). My method for drawing these cubes is as follows:

      Done once:
        - set up a vertex buffer and array buffer containing my cube vertices in model space
        - set up an array buffer indexing the cube for drawing as 12 triangles

      Done for each frame:
        - update uniform values used by the vertex shader to move all cubes at once

      Done for each cube, for each frame:
        - update uniform values used by the vertex shader to move each cube to its position
        - call glDrawElements to draw the positioned cube

    Is this a sane method? If not, how does one go about something like this? I'm guessing I need to minimize calls to glUniform, glDrawElements, or both, but I'm not sure how to do that.

    Full code for my little test (depends on gletools and pyglet) - I'm aware that my init code (at least) is really ugly; I'm concerned with the rendering code for each frame right now, and I'll move to something a little less insane for the creation of the vertex buffers and such later on.

        import pyglet
        from pyglet.gl import *
        from pyglet.window import key
        from numpy import deg2rad, tan
        from gletools import ShaderProgram, FragmentShader, VertexShader, GeometryShader

        vertexData = [-0.5, -0.5, -0.5, 1.0,
                      -0.5,  0.5, -0.5, 1.0,
                       0.5, -0.5, -0.5, 1.0,
                       0.5,  0.5, -0.5, 1.0,
                      -0.5, -0.5,  0.5, 1.0,
                      -0.5,  0.5,  0.5, 1.0,
                       0.5, -0.5,  0.5, 1.0,
                       0.5,  0.5,  0.5, 1.0]

        elementArray = [2, 1, 0, 1, 2, 3,  ## back face
                        4, 7, 6, 4, 5, 7,  ## front face
                        1, 3, 5, 3, 7, 5,  ## top face
                        2, 0, 4, 2, 4, 6,  ## bottom face
                        1, 5, 4, 0, 1, 4,  ## left face
                        6, 7, 3, 6, 3, 2]  ## right face

        def toGLArray(input):
            return (GLfloat*len(input))(*input)

        def toGLushortArray(input):
            return (GLushort*len(input))(*input)

        def initPerspectiveMatrix(aspectRatio = 1.0, fov = 45):
            frustumScale = 1.0 / tan(deg2rad(fov) / 2.0)
            fzNear = 0.5
            fzFar = 300.0
            perspectiveMatrix = [frustumScale*aspectRatio, 0.0, 0.0, 0.0,
                                 0.0, frustumScale, 0.0, 0.0,
                                 0.0, 0.0, (fzFar+fzNear)/(fzNear-fzFar), -1.0,
                                 0.0, 0.0, (2*fzFar*fzNear)/(fzNear-fzFar), 0.0]
            return perspectiveMatrix

        class ModelObject(object):
            vbo = GLuint()
            vao = GLuint()
            eao = GLuint()
            initDone = False
            verticesPool = []
            indexPool = []

            def __init__(self, vertices, indexing):
                super(ModelObject, self).__init__()
                if not ModelObject.initDone:
                    glGenVertexArrays(1, ModelObject.vao)
                    glGenBuffers(1, ModelObject.vbo)
                    glGenBuffers(1, ModelObject.eao)
                    glBindVertexArray(ModelObject.vao)
                    initDone = True
                self.numIndices = len(indexing)
                self.offsetIntoVerticesPool = len(ModelObject.verticesPool)
                ModelObject.verticesPool.extend(vertices)
                self.offsetIntoElementArray = len(ModelObject.indexPool)
                ModelObject.indexPool.extend(indexing)
                glBindBuffer(GL_ARRAY_BUFFER, ModelObject.vbo)
                glEnableVertexAttribArray(0) #position
                glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0)
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ModelObject.eao)
                glBufferData(GL_ARRAY_BUFFER, len(ModelObject.verticesPool)*4, toGLArray(ModelObject.verticesPool), GL_STREAM_DRAW)
                glBufferData(GL_ELEMENT_ARRAY_BUFFER, len(ModelObject.indexPool)*2, toGLushortArray(ModelObject.indexPool), GL_STREAM_DRAW)

            def draw(self):
                glDrawElements(GL_TRIANGLES, self.numIndices, GL_UNSIGNED_SHORT, self.offsetIntoElementArray)

        class PositionedObject(object):
            def __init__(self, mesh, pos, objOffsetUf):
                super(PositionedObject, self).__init__()
                self.mesh = mesh
                self.pos = pos
                self.objOffsetUf = objOffsetUf

            def draw(self):
                glUniform3f(self.objOffsetUf, self.pos[0], self.pos[1], self.pos[2])
                self.mesh.draw()

        w = 800
        h = 600
        AR = float(h)/float(w)
        window = pyglet.window.Window(width=w, height=h, vsync=False)
        window.set_exclusive_mouse(True)
        pyglet.clock.set_fps_limit(None)

        ## input
        forward = [False]
        left = [False]
        back = [False]
        right = [False]
        up = [False]
        down = [False]
        inputs = {key.Z: forward, key.Q: left, key.S: back, key.D: right,
                  key.UP: forward, key.LEFT: left, key.DOWN: back, key.RIGHT: right,
                  key.PAGEUP: up, key.PAGEDOWN: down}

        ## camera
        camX = 0.0
        camY = 0.0
        camZ = -1.0

        def simulate(delta):
            global camZ, camX, camY
            scale = 10.0
            move = scale*delta
            if forward[0]: camZ += move
            if back[0]: camZ += -move
            if left[0]: camX += move
            if right[0]: camX += -move
            if up[0]: camY += move
            if down[0]: camY += -move
        pyglet.clock.schedule(simulate)

        @window.event
        def on_key_press(symbol, modifiers):
            global forward, back, left, right, up, down
            if symbol in inputs.keys():
                inputs[symbol][0] = True

        @window.event
        def on_key_release(symbol, modifiers):
            global forward, back, left, right, up, down
            if symbol in inputs.keys():
                inputs[symbol][0] = False

        ## uniforms for shaders
        camOffsetUf = GLuint()
        objOffsetUf = GLuint()
        perspectiveMatrixUf = GLuint()
        camRotationUf = GLuint()

        program = ShaderProgram(
            VertexShader('''
            #version 330
            layout(location = 0) in vec4 objCoord;
            uniform vec3 objOffset;
            uniform vec3 cameraOffset;
            uniform mat4 perspMx;
            void main()
            {
                mat4 translateCamera = mat4(1.0f, 0.0f, 0.0f, 0.0f,
                                            0.0f, 1.0f, 0.0f, 0.0f,
                                            0.0f, 0.0f, 1.0f, 0.0f,
                                            cameraOffset.x, cameraOffset.y, cameraOffset.z, 1.0f);
                mat4 translateObject = mat4(1.0f, 0.0f, 0.0f, 0.0f,
                                            0.0f, 1.0f, 0.0f, 0.0f,
                                            0.0f, 0.0f, 1.0f, 0.0f,
                                            objOffset.x, objOffset.y, objOffset.z, 1.0f);
                vec4 modelCoord = objCoord;
                vec4 positionedModel = translateObject*modelCoord;
                vec4 cameraPos = translateCamera*positionedModel;
                gl_Position = perspMx * cameraPos;
            }'''),
            FragmentShader('''
            #version 330
            out vec4 outputColor;
            const vec4 fillColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
            void main()
            {
                outputColor = fillColor;
            }''')
        )

        shapes = []

        def init():
            global camOffsetUf, objOffsetUf
            with program:
                camOffsetUf = glGetUniformLocation(program.id, "cameraOffset")
                objOffsetUf = glGetUniformLocation(program.id, "objOffset")
                perspectiveMatrixUf = glGetUniformLocation(program.id, "perspMx")
                glUniformMatrix4fv(perspectiveMatrixUf, 1, GL_FALSE, toGLArray(initPerspectiveMatrix(AR)))
                obj = ModelObject(vertexData, elementArray)
                nb = 20
                for i in range(nb):
                    for j in range(nb):
                        for k in range(nb):
                            shapes.append(PositionedObject(obj, (float(i*2), float(j*2), float(k*2)), objOffsetUf))
                glEnable(GL_CULL_FACE)
                glCullFace(GL_BACK)
                glFrontFace(GL_CW)
                glEnable(GL_DEPTH_TEST)
                glDepthMask(GL_TRUE)
                glDepthFunc(GL_LEQUAL)
                glDepthRange(0.0, 1.0)
                glClearDepth(1.0)

        def update(dt):
            print pyglet.clock.get_fps()
        pyglet.clock.schedule_interval(update, 1.0)

        @window.event
        def on_draw():
            with program:
                pyglet.clock.tick()
                glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
                glUniform3f(camOffsetUf, camX, camY, camZ)
                for shape in shapes:
                    shape.draw()

        init()
        pyglet.app.run()

    Read the article

  • Can't connect to samba

    - by Rick
    Windows 7, connecting to Samba shares

    I have a follow-up question from the link above. I am running Samba 3.0.23d on FreeBSD release 7.1. I changed the policies as described above but still cannot connect to the Samba server from Windows 7 or Server 2008. I feel it is a problem with recognizing the new machines on the network: the Windows machines can see the Samba server, but cannot connect to it or view any of its files. After changing the security policies the Samba server asked for a network ID and password, but would not allow the machines to connect, saying unknown username or bad password. Here is my current config file. There is no sign of encryption anywhere - should I just add the line? I'm not sure what that would do elsewhere.

        Workgroup = WWOFFSET
        server string = WWO File Server (%v)
        security = server
        username map = /usr/local/etc/smb.users
        hosts allow = 10. 127.

        # If you want to automatically load your printer list rather
        # than setting them up individually then you'll need this
        ; load printers = yes

        # you may wish to override the location of the printcap file
        ; printcap name = /etc/printcap

        # on SystemV system setting printcap name to lpstat should allow
        # you to automatically obtain a printer list from the SystemV spool
        # system
        ; printcap name = lpstat

        # It should not be necessary to specify the print system type unless
        # it is non-standard. Currently supported print systems include:
        # bsd, cups, sysv, plp, lprng, aix, hpux, qnx
        ; printing = cups

        # Uncomment this if you want a guest account, you must add this to /etc/passwd
        # otherwise the user "nobody" is used
        ; guest account = pcguest

        # this tells Samba to use a separate log file for each machine
        # that connects
        log file = /var/log/samba/log.%m

        # Put a capping on the size of the log files (in Kb).
        max log size = 50

        # Use password server option only with security = server
        # The argument list may include:
        #   password server = My_PDC_Name [My_BDC_Name] [My_Next_BDC_Name]
        # or to auto-locate the domain controller/s
        #   password server = *
        ; password server = <NT-Server-Name>
        password server = SERVER0

        # Use the realm option only with security = ads
        # Specifies the Active Directory realm the host is part of
        ; realm = MY_REALM

        # Backend to store user information in. New installations should
        # use either tdbsam or ldapsam. smbpasswd is available for backwards
        # compatibility. tdbsam requires no further configuration.
        ; passdb backend = tdbsam
        ; passdb backend = smbpasswd

        # Using the following line enables you to customise your configuration
        # on a per machine basis. The %m gets replaced with the netbios name
        # of the machine that is connecting.
        # Note: Consider carefully the location in the configuration file of
        # this line. The included file is read at that point.
        ; include = /usr/local/etc/smb.conf.%m

        # Most people will find that this option gives better performance.
        # See the chapter 'Samba performance issues' in the Samba HOWTO Collection
        # and the manual pages for details.
        # You may want to add the following on a Linux system:
        # SO_RCVBUF=8192 SO_SNDBUF=8192
        socket options = TCP_NODELAY

        # Configure Samba to use multiple interfaces
        # If you have multiple network interfaces then you must list them
        # here. See the man page for details.
        ; interfaces = 192.168.12.2/24 192.168.13.2/24

        # Browser Control Options:
        # set local master to no if you don't want Samba to become a master
        # browser on your network. Otherwise the normal election rules apply
        ; local master = no

        # OS Level determines the precedence of this server in master browser
        # elections. The default value should be reasonable
        ; os level = 33

        # Domain Master specifies Samba to be the Domain Master Browser. This
        # allows Samba to collate browse lists between subnets. Don't use this
        # if you already have a Windows NT domain controller doing this job
        ; domain master = yes

        # Preferred Master causes Samba to force a local browser election on startup
        # and gives it a slightly higher chance of winning the election
        ; preferred master = yes

        # Enable this if you want Samba to be a domain logon server for
        # Windows95 workstations.
        ; domain logons = yes

        # if you enable domain logons then you may want a per-machine or
        # per user logon script
        # run a specific logon batch file per workstation (machine)
        ; logon script = %m.bat
        # run a specific logon batch file per username
        ; logon script = %U.bat

        # Where to store roving profiles (only for Win95 and WinNT)
        # %L substitutes for this servers netbios name, %U is username
        # You must uncomment the [Profiles] share below
        ; logon path = \\%L\Profiles\%U

        # Windows Internet Name Serving Support Section:
        # WINS Support - Tells the NMBD component of Samba to enable it's WINS Server
        ; wins support = yes

        # WINS Server - Tells the NMBD components of Samba to be a WINS Client
        # Note: Samba can be either a WINS Server, or a WINS Client, but NOT both
        ; wins server = w.x.y.z

        # WINS Proxy - Tells Samba to answer name resolution queries on
        # behalf of a non WINS capable client, for this to work there must be
        # at least one WINS Server on the network. The default is NO.
        ; wins proxy = yes

        # DNS Proxy - tells Samba whether or not to try to resolve NetBIOS names
        # via DNS nslookups. The default is NO.
        dns proxy = no

        # charset settings
        ; display charset = ASCII
        ; unix charset = ASCII
        ; dos charset = ASCII

        # These scripts are used on a domain controller or stand-alone
        # machine to add or delete corresponding unix accounts
        ; add user script = /usr/sbin/useradd %u
        ; add group script = /usr/sbin/groupadd %g
        ; add machine script = /usr/sbin/adduser -n -g machines -c Machine -d /dev/null -s /bin/false %u
        ; delete user script = /usr/sbin/userdel %u
        ; delete user from group script = /usr/sbin/deluser %u %g
        ; delete group script = /usr/sbin/groupdel %g

        unix extensions = no

    Read the article

  • Long-running transactions structured approach

    - by disown
    I'm looking for a structured approach to long-running (hours or more) transactions. As mentioned here, these types of interactions are usually handled by optimistic locking and manual merge strategies. It would be very handy to have a more structured approach to this type of problem using standard transactions. Various long-running interactions such as user registration, order confirmation, etc. all have transaction-like semantics, and it is both error-prone and tedious to invent your own fragile manual roll-back and/or time-out/clean-up strategies. Taking an RDBMS as an example, I realize that there would be a major performance cost associated with keeping all the transactions open. As an alternative, I could imagine a database supporting two isolation levels/strategies simultaneously, one for short-running and one for long-running conversations. Long-running conversations could then, for instance, have stricter limitations on data access to accommodate their taking more time (read-only semantics on some data, optimistic locking semantics, etc.). Are there any solutions that can do something similar?
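
    For the optimistic-locking route mentioned above, a minimal sketch of the usual version-check-on-commit shape; the Order entity, repository and embedded SQL are hypothetical, only illustrating how a conflict at the end of a long conversation gets detected rather than a specific product's API:

        using System;

        // Hypothetical entity carrying a version number that is bumped on every successful update.
        class Order
        {
            public int Id { get; set; }
            public int Version { get; set; }
            public string Status { get; set; }
        }

        class ConcurrencyException : Exception { }

        class OrderRepository
        {
            // Applies the change only if the row still has the version the conversation started with:
            //   UPDATE Orders SET Status = @status, Version = Version + 1
            //    WHERE Id = @id AND Version = @expectedVersion
            public void Save(Order order, int expectedVersion)
            {
                int rowsAffected = ExecuteUpdate(order, expectedVersion); // placeholder for real data access
                if (rowsAffected == 0)
                {
                    // Someone else committed in the meantime: surface it so the caller can merge or retry.
                    throw new ConcurrencyException();
                }
                order.Version = expectedVersion + 1;
            }

            private int ExecuteUpdate(Order order, int expectedVersion)
            {
                return 1; // placeholder for the actual SQL/ORM call
            }
        }

    The long conversation itself runs without any open database transaction; only the final Save needs a short one, which is exactly the manual merge/retry burden the question is hoping to avoid.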

    Read the article

  • grammar parser lexer antlr literal

    - by BB
    What's the difference between this grammar:

        ...
        if_statement : 'if' condition 'then' statement 'else' statement 'end_if';
        ...

    and this:

        ...
        if_statement : IF condition THEN statement ELSE statement END_IF;
        ...
        IF     : 'if';
        THEN   : 'then';
        ELSE   : 'else';
        END_IF : 'end_if';
        ...

    If there is any difference, does it have an impact on performance? Thanks

    Read the article

  • Cross-platform (microcontroller-PC) algorithm development

    - by Kyr
    Hello people! I was asked to develop an algorithm in C for a network application. This project will be developed on Linux for the PC and then transferred to a more portable platform, something that will include a microcontroller. There are many microcontroller companies out there that provide very nice and large libraries for TCP/IP. This software will keep statistics on network performance. The whole idea of a cross-platform (uC - PC) design seems rubbish to me, because eventually the code should be written in a more platform-specific way for the microcontroller, but I am not expert enough to judge anyway. Is there any clever way of doing this, or has anyone done this before? My brainstorming has produced "wrapper library" and "Matlab"... Any ideas? Thx!

    Read the article

  • Concatenate boost::dynamic_bitset or std::bitset

    - by MOnsDaR
    Hey, what is the best way to concatenate two bitsets? For example, I've got

        boost::dynamic_bitset<> test1( std::string("1111") );
        boost::dynamic_bitset<> test2( std::string("00") );

    They should be concatenated into a third bitset, test3, which then holds 111100. Solutions should use boost::dynamic_bitset; if a solution also works with std::bitset, that would be nice too. There should be a focus on performance when concatenating the bits.

    Read the article

  • Quick question regarding CSS sprites and memory usage

    - by Andy E
    Well, it's more to do with images and memory in general. If I use the same image multiple times on a page, will the copies be consolidated in memory, or will each one use a separate amount of memory? I'm concerned about this because I'm building a skinning system for a Windows Desktop Gadget, and I'm looking at spriting the images in the default skin so that I can keep the file system looking clean. At the same time I want to keep the memory footprint to a minimum. If I end up with a single file containing 100 images and re-use that image 100 times across the gadget, I don't want to have performance issues. Cheers.

    Read the article

  • Vsync in Flex/Flash/AS3?

    - by oshyshko
    I work on a 2D shooter game with lots of moving objects on the screen (bullets, etc.). I use BitmapData.copyPixels(...) to render the entire screen to a buffer:BitmapData, then I copyPixels from "buffer" to screen:BitmapData. The framerate is 60.

        private var bitmap:Bitmap = new Bitmap();
        private var buffer:Bitmap = new Bitmap();

        private function start():void
        {
            addChild(bitmap);
        }

        private function onEnterFrame():void
        {
            // render into "buffer"
            // copy "buffer" -> "bitmap"
        }

    The problem is that the sprites are tearing apart: part of a sprite gets shifted horizontally. It looks like a PC game with VSYNC turned off. Did anyone solve this problem? UPDATE: the question is not about performance, but about getting rid of screen tearing. UPDATE: I've created another question where you may try both implementations: the Flash way or BitmapData + copyPixels().

    Read the article

  • Java Prepared Statement arguments!

    - by Epitaph
    I am planning to replace repeatedly executed Statement objects with PreparedStatement objects to improve performance. I am using arguments like the MySQL function now(), and string variables. Most of the PreparedStatement queries I have seen contained constant values (like 10, and strings like "New York") as arguments used for the "?" in the queries. How would I go about using functions like now(), and variables as arguments? Is it necessary to use the "?"s in the queries instead of actual values? I am quite confounded.

    Read the article

  • spring.net application scope repository object on loadbalanced application

    - by Bert Vandamme
    Hi, we have an application running in a load-balanced environment, let's say web servers A and B. The load balancing is at the HTTP level, so the load balancer directs each user request to one of the two web servers. The scope of the repositories in the application is managed by the spring.net container, and the application relies on data that can be cached by the repository (for performance reasons). In this setup we can never be sure that the cached data in the repositories on both web servers is the same. Is there a mechanism in spring.net that can manage this kind of problem? Or is there another common approach for this kind of thing? Any ideas? Thx, Bert
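
    One common approach is to move the cached data out of each server's process and behind a store that both A and B read and write; a minimal sketch of the idea follows, where ICache, CustomerRepository and all member names are hypothetical illustrations rather than spring.net types:

        using System;

        // Hypothetical cache abstraction the repository depends on (wired up by the container).
        interface ICache
        {
            bool TryGet<T>(string key, out T value);
            void Set<T>(string key, T value, TimeSpan timeToLive);
        }

        // Repository that consults the cache first. If ICache is backed by a store shared by
        // both web servers (instead of per-server memory), A and B always see the same data.
        class CustomerRepository
        {
            private readonly ICache cache;

            public CustomerRepository(ICache cache)
            {
                this.cache = cache;
            }

            public string GetCustomerName(int id)
            {
                string name;
                if (cache.TryGet("customer:" + id, out name))
                {
                    return name;
                }

                name = LoadFromDatabase(id);                                // placeholder for the real query
                cache.Set("customer:" + id, name, TimeSpan.FromMinutes(5));
                return name;
            }

            private string LoadFromDatabase(int id)
            {
                return "customer-" + id;                                    // placeholder
            }
        }

    The alternative, if the per-server in-memory cache must stay, is some form of cross-server invalidation or short expiry times so the two copies cannot drift apart for long.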

    Read the article

  • In LINQ to SQL, wrapping the DataContext in a using statement - pros and cons

    - by hIpPy
    Can someone pitch in their opinion on the pros and cons of wrapping the DataContext in a using statement (or not) in LINQ to SQL, in terms of factors such as performance, memory usage, ease of coding, the right thing to do, etc.?

    Update: in one particular application I found that, without wrapping the DataContext in a using block, memory usage kept increasing because the live objects were not released for GC. As in the example below, if I hold the reference to the List returned from q and access the entities of q, I create an object graph that is not released for GC.

    DataContext with using:

        using (DBDataContext db = new DBDataContext())
        {
            var q = from x in db.Tables
                    where x.Id == someId
                    select x;
            return q.ToList();
        }

    DataContext without using, kept alive:

        DBDataContext db = new DBDataContext();
        var q = from x in db.Tables
                where x.Id == someId
                select x;
        return q.ToList();

    Thanks.

    Read the article

  • Benchmark for a .NET WinPcap wrapper

    - by brickner
    I'm developing a .NET wrapper for WinPcap called Pcap.Net. I'm trying to make sure this wrapper has high performance, and I want to compare it to WinPcap and to other .NET wrappers for WinPcap. The features I want to profile are:

      - WinPcap native features (sending packets in different ways, receiving packets in different ways, ...)
      - Interpreting packets that Pcap.Net knows how to interpret (like Ethernet, IPv4, UDP, TCP, ICMP, ...)
      - Building packets that Pcap.Net knows how to build (the same types it knows how to interpret).

    I also want to be able to profile the benchmark using the Visual Studio 2010 Ultimate profiling tools. My question is: what exactly should my benchmark do to cover these issues, and how would you suggest building it?
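
    For the timing side, a minimal sketch of a reusable micro-benchmark harness in plain .NET; no Pcap.Net-specific calls are assumed, and the Measure helper and the sample workload are made up for illustration, with each scenario (send, receive, interpret, build) meant to be dropped in as its own action:

        using System;
        using System.Diagnostics;

        static class MicroBench
        {
            // Runs the action once to warm up (JIT, caches), then times `iterations` runs.
            public static TimeSpan Measure(string name, int iterations, Action action)
            {
                action();                       // warm-up pass
                GC.Collect();                   // reduce GC noise between scenarios
                GC.WaitForPendingFinalizers();

                var stopwatch = Stopwatch.StartNew();
                for (int i = 0; i < iterations; ++i)
                {
                    action();
                }
                stopwatch.Stop();

                Console.WriteLine("{0}: {1} iterations in {2} ms ({3:F4} ms/op)",
                    name, iterations, stopwatch.ElapsedMilliseconds,
                    (double)stopwatch.ElapsedMilliseconds / iterations);
                return stopwatch.Elapsed;
            }

            static void Main()
            {
                byte[] rawPacket = new byte[1514];  // stand-in for captured packet bytes

                Measure("interpret (placeholder)", 100000, () =>
                {
                    // e.g. parse rawPacket with the wrapper under test; here just touch every byte
                    int checksum = 0;
                    for (int i = 0; i < rawPacket.Length; ++i) checksum += rawPacket[i];
                });
            }
        }

    Running the same scenarios against each wrapper under the Visual Studio profiler then shows where the time goes, while the Stopwatch numbers give the raw comparison.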

    Read the article

  • SqlDataReader / DbDataReader implementation question

    - by Jose
    Does anyone know how DbDataReaders actually work? We can use SqlDataReader as an example. When you do the following:

        cmd.CommandText = "SELECT * FROM Customers";
        var rdr = cmd.ExecuteReader();
        while (rdr.Read())
        {
            // Do something
        }

    does the data reader have all of the rows in memory, or does it just grab one, and then each time Read is called go back to the database and grab the next one? It seems that bringing in just one row at a time would be bad for performance, but bringing in all of them would make the call to ExecuteReader take a while. I know I'm the consumer of the object and it doesn't really matter how it's implemented, but I'm curious, and I would probably otherwise spend a couple of hours in Reflector getting an idea of what it's doing, so I thought I'd ask someone who might know.

    Read the article

  • A control that contains multiple duplicate properties causing deadlock issues on IIS

    - by heads5150
    I am trying to work out if the above case is true for our site. I've been told by my hosting provider that this fix (http://support.microsoft.com/kb/974165) has to be applied to our server due to performance issues. It basically describes an issue where UI code like:

        <asp:gridview id="GridView1" runat="server" ... PageSize="100"
            PagerSettings-Mode="Numeric"
            PagerStyle-BorderStyle="None"
            PagerStyle-BorderColor="Navy"
            PagerStyle-HorizontalAlign="Right"
            PagerSettings-PageButtonCount="2"
            PagerSettings-Position="Bottom">
            <PagerStyle HorizontalAlign="Left" BorderColor="Navy" BorderStyle="None"></PagerStyle>
            ...
            <PagerSettings PageButtonCount="2"></PagerSettings>
            ...
        </asp:gridview>

    causes the following warning on the server: "ISAPI 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll' reported itself as unhealthy for the following reason: 'Deadlock detected'." Does anybody know of a way I can detect this issue in the build process or in the debugger? Any help would be much appreciated.
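
    For comparison, a minimal sketch of the same markup with the duplication removed - each pager property is declared exactly once, in the nested elements only, with the conflicting HorizontalAlign values from the original resolved arbitrarily:

        <asp:GridView ID="GridView1" runat="server" PageSize="100">
            <!-- Pager settings and style declared once, not repeated as attributes on the control tag -->
            <PagerSettings Mode="Numeric" PageButtonCount="2" Position="Bottom" />
            <PagerStyle BorderStyle="None" BorderColor="Navy" HorizontalAlign="Left" />
        </asp:GridView>

    Searching the .aspx/.ascx files for controls that set the same PagerSettings-* or PagerStyle-* property both as a tag attribute and as a nested element is one way to flag candidates during the build.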

    Read the article

  • PageMethods security

    - by TenaciousImpy
    Hi, I'm trying to 'AJAX-ify' my site in order to improve the UI experience. In terms of performance, I'm also trying to get rid of the UpdatePanel. I've come across a great article over at Encosia showing a way of posting using PageMethods. My question is, how secure are page methods in a production environment? Being public, can anyone create a JSON script to POST directly to the server, or are there cross-domain checks taking place? My PageMethods would also write the data into the database (after filtering). I'm using Forms Authentication in my pages and, on page load, it redirects unauthenticated users to the login page. Would the Page Methods on this page also need to check authentication if the user POSTs directly to the method, or is that authentication inherited for the entire page? (Essentially, does the entire page cycle occur even if a user has managed to post only to the PageMethod)? Thanks
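
    As a defensive measure, a minimal sketch of a page method that re-checks the caller before touching the database; the page class, method and parameter names are made up for illustration:

        using System.Web;
        using System.Web.Services;

        public partial class Orders : System.Web.UI.Page
        {
            [WebMethod]
            public static string SaveComment(string comment)
            {
                // Page methods are plain HTTP endpoints, so verify the Forms Authentication
                // ticket explicitly rather than relying on the page's normal lifecycle.
                if (!HttpContext.Current.User.Identity.IsAuthenticated)
                {
                    throw new HttpException(401, "Not authenticated.");
                }

                // ... filter the input and write to the database here ...
                return "saved";
            }
        }

    The forms-authentication cookie is sent along with the AJAX POST, so the check above works for same-origin callers; it simply refuses requests that arrive without a valid ticket.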

    Read the article

  • SQL datasource for gridview

    - by Karsten
    Hi, I want to use a GridView with sorting and paging to display data from a SQL Server database. The query uses three joins and the full-text search CONTAINSTABLE; the FROM part of the query uses all three tables in the join. What is the best way to do this? I can think of a stored procedure, SQL directly in the SqlDataSource, or creating a view in the database. I want good performance and would like to leverage the automatic sorting and paging features of the GridView as much as possible.
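
    One option that keeps the GridView's automatic sorting and paging is to put the three-way join and CONTAINSTABLE into a stored procedure and bind through a SqlDataSource; a minimal sketch, in which the procedure name, connection string name and search textbox are assumptions:

        <asp:SqlDataSource ID="SearchSource" runat="server"
            ConnectionString="<%$ ConnectionStrings:MyDb %>"
            SelectCommand="dbo.SearchArticles"
            SelectCommandType="StoredProcedure">
            <SelectParameters>
                <asp:ControlParameter Name="SearchTerm" ControlID="txtSearch" PropertyName="Text" />
            </SelectParameters>
        </asp:SqlDataSource>

        <asp:GridView ID="ResultsGrid" runat="server"
            DataSourceID="SearchSource"
            AllowPaging="true" AllowSorting="true" PageSize="25" />

    Note that in the SqlDataSource's default DataSet mode the full result set comes back and the GridView sorts and pages it in memory; if the result set is large, paging inside the stored procedure (typically via an ObjectDataSource with custom paging) usually performs better.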

    Read the article

  • missing duration in iis 7.5 Failed Request Tracing on server core

    - by Phil McCracken
    We have Failed Request Tracing working on IIS 7.5 (Windows Server 2008 Server Core) and our rule has ASP.NET checked and verbose logging set. However, in many Googled screenshots of what a typical failed request trace looks like, we see the actual duration of each subpart in milliseconds shown to the right of the word Verbose on the "Request Details" tab. Viewing our XML in IE shows no such thing to the right of the word Verbose. Furthermore, the "Performance View" tab is blank, so there is no help viewing the durations there either. Is there something we need to enable? What gives?

    Read the article

  • Style: Dot notation vs. message notation in Objective-C 2.0

    - by groundhog
    In Objective-C 2.0 we got the "dot" notation for properties. I've seen various back-and-forths about the merits of dot notation vs. message notation. To keep the responses untainted, I'm not going to take a side in the question. What are your thoughts on dot notation vs. message notation for property accessing? Please try to keep it focused on Objective-C - the one bias I'll put forth is that Objective-C is Objective-C, so a preference that it be like Java or JavaScript isn't valid. Valid commentary concerns technical issues (operation ordering, cast precedence, performance, etc.), clarity (structure vs. object nature, both pro and con!), succinctness, etc. Note, I'm of the school of rigorous quality and readability in code, having worked on huge projects where code convention and quality are paramount (the write-once, read-a-thousand-times paradigm).

    Read the article

  • MongoMapper and bson_ext problem

    - by Fossmo
    I can't get MongoMapper to work with my Rails app. I get this error message:

        **Notice: C extension not loaded. This is required for optimum MongoDB Ruby driver performance.
        You can install the extension as follows:
          gem install bson_ext
        If you continue to receive this message after installing, make sure that the bson_ext gem
        is in your load path and that the bson_ext and mongo gems are of the same version.

    I have installed DevKit and installed the gem with gem install bson_ext --no-rdoc --no-ri (result: bson_ext-1.0.1 installed). I'm running on Windows 7, the Rails version is 2.3.7, and I used RubyInstaller when installing. Can anyone point me in the right direction?

    Read the article
