Search Results

Search found 14924 results on 597 pages for 'selector performance'.


  • IIS7 Modules - managed or native?

    - by Simon Linder
    Hi all, as the old ISAPI filters are going to die sooner or later, I want to rewrite an old ISAPI filter that was used in IIS 6 into a module for use in IIS 7. The module will be used globally, meaning it will run within each site, on a Windows Server 2008 R2 machine with IIS 7.5 installed that will host several thousand web sites and manage about 50 application pools. My question is whether I should write that module in managed or unmanaged code. One of my concerns regarding managed code is memory consumption due to the .NET framework overhead; I don't know how this would affect the server's performance. I have already written modules in managed as well as unmanaged code, so that is not what's holding up my decision. But I would prefer to write the module in C# if there are no huge drawbacks. Any suggestions on this issue?
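    For reference, a minimal sketch of what the managed option looks like (class name and header are illustrative, not from the post): a global managed module is just an IHttpModule registered in applicationHost.config, whereas a native module would implement CHttpModule against the IIS C++ API instead.

        using System;
        using System.Web;

        // Minimal managed-module sketch: hooks BeginRequest, roughly where an
        // ISAPI filter's SF_NOTIFY_PREPROC_HEADERS logic would have lived.
        public class LegacyFilterModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (sender, e) =>
                {
                    HttpContext ctx = ((HttpApplication)sender).Context;
                    ctx.Response.AppendHeader("X-Handled-By", "LegacyFilterModule");
                };
            }

            public void Dispose() { }
        }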

    Read the article

  • Python: create a function to modify a list by reference not value

    - by Jonathan
    Hey all- I'm doing some performance-critical Python work and want to create a function that removes a few elements from a list if they meet certain criteria. I'd rather not create any copies of the list because it's filled with a lot of really large objects. Functionality I want to implement:

        def listCleanup(listOfElements):
            i = 0
            for element in listOfElements:
                if element.meetsCriteria():
                    del listOfElements[i]
                i += 1
            return listOfElements

        myList = range(10000)
        myList = listCleanup(myList)

    I'm not familiar with the low-level workings of Python. Is myList being passed by value or by reference? How can I make this faster? Is it possible to somehow extend the list class and implement listCleanup() within that?

        myList = range(10000)
        myList.listCleanup()

    Thanks- Jonathan
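    For what it's worth: Python passes object references, so mutating the list inside the function is visible to the caller, but deleting items while iterating (as above) skips the element that slides into the deleted slot. A minimal in-place sketch that avoids both copying the large objects and the skipping bug (function name kept from the post):

        def listCleanup(listOfElements):
            # The comprehension copies only references, not the large objects;
            # slice assignment then mutates the original list object in place,
            # so every existing reference to it sees the cleaned result.
            listOfElements[:] = [e for e in listOfElements if not e.meetsCriteria()]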

    Read the article

  • Need details about applications that are running on Windows Azure

    - by veda
    I have an application that requires a large amount of data storage (say, some petabytes) and computing resources. Instead of going for clusters, I am planning to propose using the Windows Azure cloud for this application. I have gone through the Windows Azure white papers and collected some details, but I feel that is not substantial. I need to do a case study of applications that run on Azure and use Azure storage efficiently. I looked for research papers on the performance of applications in Windows Azure, but as Azure is quite new, I wasn't able to find any. Now I am looking for white papers or other details about applications that use Azure storage, to substantiate my proposal. I also need to understand the Windows Azure storage architecture and virtual machine architecture. Does anyone know of research papers, blogs, or anything else related to these topics?

    Read the article

  • [OpenGL] I'm having an issue using GLshort to represent vertices and normals.

    - by Xylopia
    As my project gets close to the optimization stage, I notice that reducing vertex metadata could vastly improve the performance of 3D rendering. Eventually, I searched around and found the following advice on Stack Overflow: "Using GL_SHORT instead of GL_FLOAT in an OpenGL ES vertex array", "How do you represent a normal or texture coordinate using GLshorts?", and "Advice on speeding up OpenGL ES 1.1 on the iPhone". Simple experiments show that switching from "FLOAT" to "SHORT" for vertices and normals isn't tough, but what troubles me is that when you scale vertices back to their original size (with glScalef), normals are multiplied by the reciprocal of the scale. How, then, do you use "short" for both vertices and normals at the same time? I've been trying this and that for about a full day, but so far I could only get "float vertex w/ byte normal" or "short vertex w/ float normal" to work. Your help would be truly appreciated.
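    One approach that may work for the ES 1.1-style fixed pipeline, sketched below with illustrative data and quantization factor: integer normal components are mapped to [-1, 1] automatically, so store normals as unit vectors scaled by 32767, and enable GL_NORMALIZE so the driver renormalizes them after the glScalef that restores the positions.

        /* One triangle; positions pre-multiplied by 1024, normals = unit * 32767. */
        GLshort positions[] = { 0, 0, 0,   1024, 0, 0,   0, 1024, 0 };
        GLshort normals[]   = { 0, 0, 32767,   0, 0, 32767,   0, 0, 32767 };

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glVertexPointer(3, GL_SHORT, 0, positions);
        glNormalPointer(GL_SHORT, 0, normals);

        glEnable(GL_NORMALIZE);                      /* renormalize after the scale */
        glPushMatrix();
        glScalef(1.0f/1024, 1.0f/1024, 1.0f/1024);   /* undo position quantization  */
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glPopMatrix();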

    Read the article

  • GPU YUV to RGB. Worth the effort?

    - by Jaime Pardos
    Hello, I have to convert several full PAL videos (720x576@25) from YUV 4:2:2 to RGB, in real time, and probably apply a custom resize to each. I have thought of using the GPU, as I have seen an example that does just this (except that it's 4:4:4, so the bpp is the same in source and destination): http://www.fourcc.org/source/YUV420P-OpenGL-GLSLang.c However, I don't have any experience with GPUs and I'm not sure what can be done. The example, as I understand it, just converts the video frame to RGB and displays it on the screen. Is it possible to get the processed frame back instead? Would it be worth the effort to send it to the GPU, get it transformed, and send it back to main memory, or would that kill performance? Being a bit platform-specific: assuming I work on Windows, is it possible to get an OpenGL or DirectDraw surface from a window so the GPU can draw directly to it?
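    On getting the frame back: the usual route is to render into an off-screen target and read it back with glReadPixels; the conversion itself is only a few shader lines. A minimal fragment-shader sketch (BT.601 full-range coefficients; assumes Y, U and V are uploaded as three single-channel textures, as in the fourcc.org example):

        uniform sampler2D texY, texU, texV;
        varying vec2 uv;

        void main() {
            float y = texture2D(texY, uv).r;
            float u = texture2D(texU, uv).r - 0.5;
            float v = texture2D(texV, uv).r - 0.5;
            // BT.601 YUV -> RGB
            gl_FragColor = vec4(y + 1.402 * v,
                                y - 0.344 * u - 0.714 * v,
                                y + 1.772 * u,
                                1.0);
        }

    The readback does cost a GPU-to-CPU transfer per frame, but at 720x576@25 that is roughly 30 MB/s of RGB data, which is modest.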

    Read the article

  • Is AsParallel() good practice in a web environment?

    - by Bjorn Bailleul
    I have no doubt that for client applications, AsParallel() will bring some out-of-the-box performance gains. But what if I were to use it in a web environment? Let's say I have a widget framework that loops over all widgets to get their data and render output. This would parallelize nicely, no? I do have my doubts about using AsParallel() in this scenario. If I have a large number of visitors on my site, isn't IIS already going to use multiple threads to handle all the requests? Aren't there going to be locking issues after a while, or threads dying because all processors are in use? It's just a thought; what do you think about this?
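    If it does turn out to help, PLINQ at least lets you cap how many cores a single request may claim; a runnable sketch (the widget type and names are illustrative, not from the post):

        using System;
        using System.Linq;

        class Widget
        {
            public string RenderOutput() { return "<div>widget</div>"; }
        }

        class Demo
        {
            static void Main()
            {
                var widgets = Enumerable.Range(0, 10).Select(i => new Widget()).ToArray();

                // Cap the degree of parallelism so one request doesn't claim
                // every core on an already busy web server.
                var rendered = widgets
                    .AsParallel()
                    .WithDegreeOfParallelism(2)
                    .Select(w => w.RenderOutput())
                    .ToList();

                Console.WriteLine(string.Join(Environment.NewLine, rendered));
            }
        }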

    Read the article

  • PostgreSQL 8.3 data types: xml vs varchar

    - by Sejanus
    There's an xml data type in Postgres. I never used it before, so I'd like to hear opinions: downsides and upsides vs. using a regular varchar (or text) column to store XML. The text I'm going to store is XML, well-formed, UTF-8. No need to search by it (I've read that searching by xml is slow). This XML is actually data prepared for PDF generation with Apache FOP. The XML can be generated dynamically from data found elsewhere (other Postgres tables); it's stored as-is only so that I won't need to generate it twice, kind of a second backup for already-generated PDF documents. Anything else to know? Good practices, performance, maintenance, etc.?
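    One concrete difference worth knowing: the xml type validates well-formedness on write, which a plain text/varchar column won't. A small sketch (table and column names are illustrative):

        -- xml column: malformed input is rejected at INSERT time
        CREATE TABLE fop_source (
            doc_id  bigint PRIMARY KEY,
            fo_xml  xml NOT NULL
        );

        INSERT INTO fop_source VALUES (1, '<root><p>ok</p></root>');  -- accepted
        INSERT INTO fop_source VALUES (2, '<root><p>oops</root>');    -- error: not well-formed

    Since the documents are write-once and never searched, that check (plus self-documenting intent) is about all xml buys here; on disk both types are ordinary TOASTable text.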

    Read the article

  • A web framework where AJAX was not an afterthought

    - by Pirate for Profit
    AJAX is a pain in the ass because it essentially means you'll have to write two sets of similar-ish code: one for browsers with JavaScript enabled and one for those without. Not only that, but you have to connect JavaScript events to hook into your models and display the results. And if all that weren't bad enough, you need to send an address change with the request, otherwise the user won't be able to "click back" correctly (if confused, look at what happens to the address bar when you click links in GMail). We're searching for something designed from the start with all these concerns in mind. Performance and security are also obvious major concerns. We love config-based systems as well, where you don't have to write a lot of code; you just drop it into an easily read config format. It's like asking for the holy grail, right?

    Read the article

  • Throughput measurements

    - by dotsid
    I wrote a simple load-testing tool for testing the performance of Java modules. One problem I faced is the algorithm for throughput measurement. Tests are executed in several threads (the client configures how many times the test should be repeated), and execution time is logged. So, when the tests are finished, we have the following history (4 test executions, 2 threads, 36ms overall time; '-' is idle, '*' is test execution):

             5ms    9ms       4ms     13ms
        T1 |-*****-*********-****-*************-|
             3ms   6ms     7ms      11ms
        T2 |-***-******-*******-***********-----|
           <-----------------36ms--------------->

    For the moment I calculate throughput (per second) in the following way: 1000 / overallTime * threadCount. But there is a problem. What if one thread completes its own tests more quickly (for whatever reason):

             3ms 3ms 3ms 3ms
        T1 |-***-***-***-***----------------|
             3ms   6ms     7ms     11ms
        T2 |-***-******-*******-***********-|
           <--------------32ms-------------->

    In this case the actual throughput is much better, because the measured throughput is bounded by the slowest thread. So, my question is: how should I measure the throughput of code execution in a multithreaded environment?
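    One way out, sketched below: count completed executions and divide by the wall-clock time of the whole run, so a thread that finishes early simply contributes fewer busy milliseconds instead of skewing the result (all names are illustrative):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicLong;

        public class ThroughputDemo {
            public static void main(String[] args) throws InterruptedException {
                final int threads = 2, executionsPerThread = 4;
                final AtomicLong completed = new AtomicLong();
                ExecutorService pool = Executors.newFixedThreadPool(threads);

                long start = System.nanoTime();
                for (int t = 0; t < threads; t++) {
                    pool.submit(new Runnable() {
                        public void run() {
                            for (int i = 0; i < executionsPerThread; i++) {
                                doWork();                  // the module under test
                                completed.incrementAndGet();
                            }
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.MINUTES);
                double seconds = (System.nanoTime() - start) / 1e9;

                // throughput = completed executions per wall-clock second
                System.out.printf("throughput: %.1f ops/s%n", completed.get() / seconds);
            }

            static void doWork() {
                try { Thread.sleep(5); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }
        }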

    Read the article

  • Why does my application ask for a codec to play MVI (.MOV) video files when WMP and QuickTime can play them?

    - by Daniel Lip
    I have an application I wrote some time ago. Loading the video file works fine, but when trying to play/use the file I get a MessageBox saying that a codec is needed, and to use GSpot or search the internet. When I play these files from my hard disk with Windows Media Player or QuickTime, there are no problems. The video files are named, for example, MVI_2483, and in the file properties I see the type: QuickTime Movie (.MOV). In my application I'm using DirectShowLib-2005.dll. Below is the class I'm using to extract frames (only lightning frames) from the video file. In Form1 I have a button click event that just starts the action:

        private void button8_Click(object sender, EventArgs e)
        {
            viewToolStripMenuItem.Enabled = false;
            fileToolStripMenuItem.Enabled = false;
            button2.Enabled = false;
            label14.Visible = false;
            label15.Visible = false;
            label21.Visible = false;
            label22.Visible = false;
            label24.Visible = false;
            label25.Visible = false;
            ExtractAutomatic = true;
            DirectoryInfo info = new DirectoryInfo(_videoFile);
            string dirName = info.Name;
            automaticModeDirectory = dirName + "_Automatic";
            subDirectoryName = _outputDir + "\\" + automaticModeDirectory;
            if (secondPass == true)
            {
                Start(true);
            }
            Start(false);
        }

    This is the Start function in Form1:

        private void Start(bool secondpass)
        {
            setpicture(-1);
            if (Directory.Exists(_outputDir) && secondpass == false)
            {
            }
            else
            {
                Directory.CreateDirectory(_outputDir);
            }
            if (ExtractAutomatic == true)
            {
                string subDirectory_Automatic_Name = _outputDir + "\\" + automaticModeDirectory;
                Directory.CreateDirectory(subDirectory_Automatic_Name);
                f = new WmvAdapter(_videoFile, Path.Combine(subDirectory_Automatic_Name));
            }
            else
            {
                string subDirectory_Manual_Name;
                if (Directory.Exists(subDirectoryName))
                {
                    subDirectory_Manual_Name = subDirectoryName;
                    f = new WmvAdapter(_videoFile, Path.Combine(subDirectory_Manual_Name));
                }
                else
                {
                    subDirectory_Manual_Name = _outputDir + "\\" + averagesListTextFileDirectory + "_Manual";
                    Directory.CreateDirectory(subDirectory_Manual_Name);
                    f = new WmvAdapter(_videoFile, Path.Combine(subDirectory_Manual_Name));
                }
            }
            button1.Enabled = false;
            f.Secondpass = secondpass;
            f.FramesToSave = _fts;
            f.FrameCountAvailable += new WmvAdapter.FrameCountEventHandler(f_FrameCountAvailable);
            f.StatusChanged += new WmvAdapter.EventHandler(f_StatusChanged);
            f.ProgressChanged += new WmvAdapter.ProgressEventHandler(f_ProgressChanged);
            this.Text = "Processing Please Wait...";
            label5.ForeColor = Color.Green;
            label5.Text = "Processing Please Wait";
            button8.Enabled = false;
            button5.Enabled = false;
            label5.Visible = true;
            pictureBox1.Image = Lightnings_Extractor.Properties.Resources.Weather_Michmoret;
            Hrs = 0; // number of hours
            Min = 0; // number of minutes
            Sec = 0; // number of seconds
            timeElapsed = 0;
            label10.Text = "00:00:00";
            label11.Visible = false;
            label12.Visible = false;
            label9.Visible = false;
            label8.Visible = false;
            this.button1.Enabled = false;
            myTrackPanelss1.trackBar1.Enabled = false;
            this.checkBox2.Enabled = false;
            this.checkBox1.Enabled = false;
            numericUpDown1.Enabled = false;
            timer1.Start();
            label2.Text = "";
            label1.Visible = true;
            label2.Visible = true;
            label3.Visible = true;
            label4.Visible = true;
            f.Start();
        }

    And this is the class (which is not my own; I only modified it in a few places) that is causing the problem: using System; using System.Diagnostics; using System.Drawing; using System.Drawing.Imaging; using System.IO; using System.Runtime.InteropServices; using DirectShowLib; using
System.Collections.Generic; using Extracting_Frames; using System.Windows.Forms; namespace Polkan.DataSource { internal class WmvAdapter : ISampleGrabberCB, IDisposable { #region Fields_Properties_and_Events bool dis = false; int count = 0; const string fileName = @"d:\histogramValues.dat"; private IFilterGraph2 _filterGraph; private IMediaControl _mediaCtrl; private IMediaEvent _mediaEvent; private int _width; private int _height; private readonly string _outFolder; private int _frameId; //better use a custom EventHandler that passes the results of the action to the subscriber. public delegate void EventHandler(object sender, EventArgs e); public event EventHandler StatusChanged; public delegate void FrameCountEventHandler(object sender, FrameCountEventArgs e); public event FrameCountEventHandler FrameCountAvailable; public delegate void ProgressEventHandler(object sender, ProgressEventArgs e); public event ProgressEventHandler ProgressChanged; private IMediaSeeking _mSeek; private long _duration = 0; private long _avgFrameTime = 0; //just save the averages to a List (not to fs) public List<double> AveragesList { get; set; } public List<long> histogramValuesList; public bool Secondpass { get; set; } public List<int> FramesToSave { get; set; } #endregion #region Constructors and Destructors public WmvAdapter(string file, string outFolder) { _outFolder = outFolder; try { SetupGraph(file); } catch { Dispose(); MessageBox.Show("A codec is required to load this video file. Please use http://www.headbands.com/gspot/ or search the web for the correct codec"); } } ~WmvAdapter() { CloseInterfaces(); } #endregion public void Dispose() { CloseInterfaces(); } public void Start() { EstimateFrameCount(); int hr = _mediaCtrl.Run(); WaitUntilDone(); DsError.ThrowExceptionForHR(hr); } public void WaitUntilDone() { int hr; const int eAbort = unchecked((int)0x80004004); do { System.Windows.Forms.Application.DoEvents(); EventCode evCode; if (dis == true) { return; } hr = _mediaEvent.WaitForCompletion(100, out evCode); }while (hr == eAbort); DsError.ThrowExceptionForHR(hr); OnStatusChanged(); } //Edit: added events protected virtual void OnStatusChanged() { if (StatusChanged != null) StatusChanged(this, new EventArgs()); } protected virtual void OnFrameCountAvailable(long frameCount) { if (FrameCountAvailable != null) FrameCountAvailable(this, new FrameCountEventArgs() { FrameCount = frameCount }); } protected virtual void OnProgressChanged(int frameID) { if (ProgressChanged != null) ProgressChanged(this, new ProgressEventArgs() { FrameID = frameID }); } /// <summary> build the capture graph for grabber. 
</summary> private void SetupGraph(string file) { ISampleGrabber sampGrabber = null; IBaseFilter capFilter = null; IBaseFilter nullrenderer = null; _filterGraph = (IFilterGraph2)new FilterGraph(); _mediaCtrl = (IMediaControl)_filterGraph; _mediaEvent = (IMediaEvent)_filterGraph; _mSeek = (IMediaSeeking)_filterGraph; var mediaFilt = (IMediaFilter)_filterGraph; try { // Add the video source int hr = _filterGraph.AddSourceFilter(file, "Ds.NET FileFilter", out capFilter); DsError.ThrowExceptionForHR(hr); // Get the SampleGrabber interface sampGrabber = new SampleGrabber() as ISampleGrabber; var baseGrabFlt = sampGrabber as IBaseFilter; ConfigureSampleGrabber(sampGrabber); // Add the frame grabber to the graph hr = _filterGraph.AddFilter(baseGrabFlt, "Ds.NET Grabber"); DsError.ThrowExceptionForHR(hr); // --------------------------------- // Connect the file filter to the sample grabber // Hopefully this will be the video pin, we could check by reading it's mediatype IPin iPinOut = DsFindPin.ByDirection(capFilter, PinDirection.Output, 0); // Get the input pin from the sample grabber IPin iPinIn = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Input, 0); hr = _filterGraph.Connect(iPinOut, iPinIn); DsError.ThrowExceptionForHR(hr); // Add the null renderer to the graph nullrenderer = new NullRenderer() as IBaseFilter; hr = _filterGraph.AddFilter(nullrenderer, "Null renderer"); DsError.ThrowExceptionForHR(hr); // --------------------------------- // Connect the sample grabber to the null renderer iPinOut = DsFindPin.ByDirection(baseGrabFlt, PinDirection.Output, 0); iPinIn = DsFindPin.ByDirection(nullrenderer, PinDirection.Input, 0); hr = _filterGraph.Connect(iPinOut, iPinIn); DsError.ThrowExceptionForHR(hr); // Turn off the clock. This causes the frames to be sent // thru the graph as fast as possible hr = mediaFilt.SetSyncSource(null); DsError.ThrowExceptionForHR(hr); // Read and cache the image sizes SaveSizeInfo(sampGrabber); //Edit: get the duration hr = _mSeek.GetDuration(out _duration); DsError.ThrowExceptionForHR(hr); } finally { if (capFilter != null) { Marshal.ReleaseComObject(capFilter); } if (sampGrabber != null) { Marshal.ReleaseComObject(sampGrabber); } if (nullrenderer != null) { Marshal.ReleaseComObject(nullrenderer); } GC.Collect(); } } private void EstimateFrameCount() { try { //1sec / averageFrameTime double fr = 10000000.0 / _avgFrameTime; double frameCount = fr * (_duration / 10000000.0); OnFrameCountAvailable((long)frameCount); } catch { } } public double framesCounts() { double fr = 10000000.0 / _avgFrameTime; double frameCount = fr * (_duration / 10000000.0); return frameCount; } private void SaveSizeInfo(ISampleGrabber sampGrabber) { // Get the media type from the SampleGrabber var media = new AMMediaType(); int hr = sampGrabber.GetConnectedMediaType(media); DsError.ThrowExceptionForHR(hr); if ((media.formatType != FormatType.VideoInfo) || (media.formatPtr == IntPtr.Zero)) { throw new NotSupportedException("Unknown Grabber Media Format"); } // Grab the size info var videoInfoHeader = (VideoInfoHeader)Marshal.PtrToStructure(media.formatPtr, typeof(VideoInfoHeader)); _width = videoInfoHeader.BmiHeader.Width; _height = videoInfoHeader.BmiHeader.Height; //Edit: get framerate _avgFrameTime = videoInfoHeader.AvgTimePerFrame; DsUtils.FreeAMMediaType(media); GC.Collect(); } private void ConfigureSampleGrabber(ISampleGrabber sampGrabber) { var media = new AMMediaType { majorType = MediaType.Video, subType = MediaSubType.RGB24, formatType = FormatType.VideoInfo }; int hr = 
sampGrabber.SetMediaType(media); DsError.ThrowExceptionForHR(hr); DsUtils.FreeAMMediaType(media); GC.Collect(); hr = sampGrabber.SetCallback(this, 1); DsError.ThrowExceptionForHR(hr); } private void CloseInterfaces() { try { if (_mediaCtrl != null) { _mediaCtrl.Stop(); _mediaCtrl = null; dis = true; } } catch (Exception ex) { Debug.WriteLine(ex); } if (_filterGraph != null) { Marshal.ReleaseComObject(_filterGraph); _filterGraph = null; } GC.Collect(); } int ISampleGrabberCB.SampleCB(double sampleTime, IMediaSample pSample) { Marshal.ReleaseComObject(pSample); return 0; } int ISampleGrabberCB.BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen) { if (Form1.ExtractAutomatic == true) { using (var bitmap = new Bitmap(_width, _height, _width * 3, PixelFormat.Format24bppRgb, pBuffer)) { if (!this.Secondpass) { long[] HistogramValues = Form1.GetHistogram(bitmap); long t = Form1.GetTopLumAmount(HistogramValues, 1000); Form1.averagesTest.Add(t); } else { //this is the changed part if (_frameId > 0) { if (Form1.averagesTest[_frameId] / 1000.0 - Form1.averagesTest[_frameId - 1] / 1000.0 > 150.0) { count = 6; } if (count > 0) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); count --; } } } _frameId++; //let only report each 100 frames for performance if (_frameId % 100 == 0) OnProgressChanged(_frameId); } } else { using (var bitmap = new Bitmap(_width, _height, _width * 3, PixelFormat.Format24bppRgb, pBuffer)) { if (!this.Secondpass) { //get avg double average = GetAveragePixelValue(bitmap); if (AveragesList == null) AveragesList = new List<double>(); //save avg AveragesList.Add(average); //***************************\\ // for (int i = 0; i < (int)framesCounts(); i++) // { // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); //***************************\\ //} } else { if (FramesToSave != null && FramesToSave.Contains(_frameId)) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); using (BinaryWriter binWriter = new BinaryWriter(File.Open(fileName, FileMode.Create))) { for (int i = 0; i < histogramValuesList.Count; i++) { binWriter.Write(histogramValuesList[(int)i]); } binWriter.Close(); } } } _frameId++; //let only report each 100 frames for performance if (_frameId % 100 == 0) OnProgressChanged(_frameId); } } return 0; } /* int ISampleGrabberCB.SampleCB(double sampleTime, IMediaSample pSample) { Marshal.ReleaseComObject(pSample); return 0; } int ISampleGrabberCB.BufferCB(double sampleTime, IntPtr pBuffer, int bufferLen) { using (var bitmap = new Bitmap(_width, _height, _width * 3, PixelFormat.Format24bppRgb, pBuffer)) { if (!this.Secondpass) { //get avg double average = GetAveragePixelValue(bitmap); if (AveragesList == null) AveragesList = new List<double>(); //save avg AveragesList.Add(average); //***************************\\ // for (int i = 0; i < (int)framesCounts(); i++) // { // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); long t = 
Form1.GetTopLumAmount(HistogramValues, 1000); //***************************\\ Form1.averagesTest.Add(t); // to add this list to a text file or binary file and read the averages from the file when its is Secondpass !!!!! //} } else { if (FramesToSave != null && FramesToSave.Contains(_frameId)) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); // get histogram values long[] HistogramValues = Form1.GetHistogram(bitmap); if (histogramValuesList == null) histogramValuesList = new List<long>(256); histogramValuesList.AddRange(HistogramValues); using (BinaryWriter binWriter = new BinaryWriter(File.Open(fileName, FileMode.Create))) { for (int i = 0; i < histogramValuesList.Count; i++) { binWriter.Write(histogramValuesList[(int)i]); } binWriter.Close(); } } for (int x = 1; x < Form1.averagesTest.Count; x++) { double fff = Form1.averagesTest[x] / 1000.0 - Form1.averagesTest[x - 1] / 1000.0; if (Form1.averagesTest[x] / 1000.0 - Form1.averagesTest[x - 1] / 1000.0 > 180.0) { bitmap.RotateFlip(RotateFlipType.Rotate180FlipX); bitmap.Save(Path.Combine(_outFolder, _frameId.ToString("D6") + ".bmp")); _frameId++; } } } _frameId++; //let only report each 100 frames for performance if (_frameId % 100 == 0) OnProgressChanged(_frameId); } return 0; }*/ private unsafe double GetAveragePixelValue(Bitmap bmp) { BitmapData bmData = null; try { bmData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb); int stride = bmData.Stride; IntPtr scan0 = bmData.Scan0; int w = bmData.Width; int h = bmData.Height; double sum = 0; long pixels = bmp.Width * bmp.Height; byte* p = (byte*)scan0.ToPointer(); for (int y = 0; y < h; y++) { p = (byte*)scan0.ToPointer(); p += y * stride; for (int x = 0; x < w; x++) { double i = ((double)p[0] + p[1] + p[2]) / 3.0; sum += i; p += 3; } //no offset incrementation needed when getting //the pointer at the start of each row } bmp.UnlockBits(bmData); double result = sum / (double)pixels; return result; } catch { try { bmp.UnlockBits(bmData); } catch { } } return -1; } } public class FrameCountEventArgs { public long FrameCount { get; set; } } public class ProgressEventArgs { public int FrameID { get; set; } } }

    I remember I had codec problems before and installed the codecs that were needed, but in this case both QuickTime and Windows Media Player can play the video files, so why can't the application detect and find a codec on my computer? GSpot says the codec is AVC1, but again, WMP and QuickTime play the video files with no problems. The video files are from my digital camera!

    Read the article

  • Best practices to build a highly configurable software product.

    - by Kabeer
    Hello. I am working on a software product that can substantially change its behavior based on the configuration and metadata supplied. I would like to know best practices for architecting and building a highly configurable software product. Considering that there is a substantial number of configuration parameters, I'd like to look at something that will not affect performance before I look at dependency injection. My platform is .NET. I seek recommendations on the architecture, design, and implementation fronts.

    Read the article

  • Are we asking too much of transactional memory?

    - by Carl Seleborg
    I've been reading up a lot about transactional memory lately. There is a bit of hype around TM, so a lot of people are enthusiastic about it, and it does provide solutions for painful problems with locking, but you regularly also see complaints: you can't do I/O; you have to write your atomic sections so they can run several times (be careful with your local variables!); software transactional memory offers poor performance; [insert your pet peeve here]. I understand these concerns: more often than not, you find articles about STMs that only run on some particular hardware that supports some really nifty atomic operation (like LL/SC), or that have to be supported by some imaginary compiler, or that require all accesses to memory to be transactional, or that introduce monad-style type constraints, etc. And above all: these are real problems. This has led me to ask myself: what speaks against local use of transactional memory as a replacement for locks? Would this already bring enough value, or must transactional memory be used all over the place if used at all?

    Read the article

  • Parameterize the WHERE clause?

    - by ControlFlow
    Hi, Stack Overflow! I need to write a stored procedure for SQL Server 2008 that performs a huge select query, and I need to filter its results by specifying the filtering type via the procedure's parameters (i.e., parameterize the WHERE clause). I found solutions like this:

        create table Foo(
            id bigint,
            code char,
            name nvarchar(max))
        go
        insert into Foo values (1,'a','aaa'), (2,'b','bbb'), (3,'c','ccc')
        go
        create procedure Bar
            @FilterType nvarchar(max),
            @FilterValue nvarchar(max)
        as
        begin
            select * from Foo as f
            where case @FilterType
                      when 'by_id' then f.id
                      when 'by_code' then f.code
                      when 'by_name' then f.name
                  end
                = case @FilterType
                      when 'by_id' then cast(@FilterValue as bigint)
                      when 'by_code' then cast(@FilterValue as char)
                      when 'by_name' then @FilterValue
                  end
        end
        go
        exec Bar 'by_id', '1';
        exec Bar 'by_code', 'b';
        exec Bar 'by_name', 'ccc';

    But it doesn't work when the columns have different data types... It's possible to cast all the columns to nvarchar(max) and compare them as strings, but I think that would cause performance degradation... Is it possible to parameterize the WHERE clause in a stored procedure without resorting to things like EXEC, sp_executesql (dynamic SQL), etc.?
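    One common workaround, sketched against the post's Foo table: give each column its own correctly typed, optional parameter, with NULL meaning "no filter on this column". Each comparison then keeps its native type and needs no per-row conversion:

        create procedure BarTyped
            @Id   bigint        = null,
            @Code char(1)       = null,
            @Name nvarchar(max) = null
        as
        begin
            select * from Foo as f
            where (@Id   is null or f.id   = @Id)
              and (@Code is null or f.code = @Code)
              and (@Name is null or f.name = @Name)
        end
        go
        exec BarTyped @Id = 1;
        exec BarTyped @Code = 'b';
        exec BarTyped @Name = N'ccc';

    The usual caveat applies: such catch-all predicates can push the optimizer toward a generic plan, so this trades dynamic SQL for possibly weaker index use.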

    Read the article

  • How to optimize MATLAB loops?

    - by striglia
    I have been working lately on a number of iterative algorithms in MATLAB, and have been getting hit hard by MATLAB's performance (or lack thereof) when it comes to loops. I'm aware of the benefit of vectorizing code where possible, but are there any tools for optimization when you need the loop for your algorithm? I am aware of the MEX-file option to write small subroutines in C/C++, although given my algorithms, this can be a very painful option given the data structures required. I mainly use MATLAB for the simplicity and speed of prototyping, so a syntactically complex, statically typed language is not ideal for my situation. Are there any other suggestions? Even other languages (Python?) with relatively painless matrix tools are an option.
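    Short of MEX, the cheapest wins inside an unavoidable loop are usually preallocation and hoisting invariant work out of the loop body; a minimal sketch:

        % Preallocating avoids growing the array on every iteration,
        % which is one of the most common loop costs in MATLAB.
        n = 1e6;
        out = zeros(n, 1);        % allocate once
        for k = 1:n
            out(k) = sqrt(k);     % no reallocation per iteration
        end

    The built-in profiler (profile on / profile viewer) is also worth a run to confirm the loop really is the hotspot.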

    Read the article

  • Using different languages in one project

    - by Tarbal
    I recently heard about the use of several different languages in a (big) project, and I have also read about famous services such as Twitter using Rails as the frontend, mixed with some other languages, and, I think, Scala as the backend. Is this common practice? Who does this? I'm sure there are disadvantages to it. I imagine you will have problems with the different interpreters/compilers and with seamlessly connecting the different languages. Is this true? Why is this actually done? For performance?

    Read the article

  • send form data as web service in symfony

    - by Lars
    I am making a restrictive portal for a WiFi network using symfony, and I want to expose a form as a web service to other sites that want to use this portal. How should I solve this? I realize I could go the SOAP/WSDL route, but since symfony is already RESTful, it seems to me I could go the RESTful route with considerably less pain and less loss of performance. Right now I have a working form, but I've only made a casual attempt to bring the form to a remote site by using cURL. The form does not work remotely, since the routing is not set up correctly (I think). Can someone help me with this? Thanks.
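    For the remote side, the simplest test is to POST the form's fields straight to the form's action URL; a minimal PHP cURL sketch (URL and field names are placeholders, not from the post — the symfony route just needs to accept POSTs from other hosts):

        <?php
        // Submit the portal form from a remote site.
        $ch = curl_init('http://portal.example.com/login/submit');
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
            'username' => 'guest',
            'voucher'  => 'ABC123',
        )));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        $response = curl_exec($ch);
        curl_close($ch);

    If that fails while the local form works, the usual suspects are CSRF protection on the form and routes generated as relative URLs.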

    Read the article

  • C# Threading vs single thread

    - by user177883
    Is it always guaranteed that a multi-threaded application will run faster than a single-threaded one? I have two threads that populate data from a data source, but for different entities (e.g., a database, from two different tables). It seems like the single-threaded version of the application is running faster than the version with two threads. What could the reason be? When I look at the performance monitor, both CPUs are very spiky; is this due to context switching? What are the best practices to max out the CPU and fully utilize it? I hope this is not ambiguous.
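    For what it's worth, a sketch of the two-table load with one connection per worker; if the two loads still serialize further down (same disk, same server, a shared lock), the parallel version can easily lose to the sequential one (connection string and table names are placeholders):

        using System;
        using System.Data.SqlClient;
        using System.Threading.Tasks;

        class LoadDemo
        {
            static int CountRows(string table)
            {
                // Each worker opens its own connection so the loads
                // don't serialize on shared client-side state.
                using (var conn = new SqlConnection(@"Server=.;Database=Demo;Integrated Security=true"))
                using (var cmd = new SqlCommand("SELECT COUNT(*) FROM " + table, conn))
                {
                    conn.Open();
                    return (int)cmd.ExecuteScalar();
                }
            }

            static void Main()
            {
                Parallel.Invoke(
                    () => Console.WriteLine(CountRows("TableA")),
                    () => Console.WriteLine(CountRows("TableB")));
            }
        }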

    Read the article

  • Which is best for building a Gantt-chart-like application: Flex or Flash ActionScript?

    - by coderex
    Which is best for building an application like the one shown in the image, in terms of graphics and performance? I want to know whether Flash or Flex is more suitable for this. In Flash, can we build grid-style applications easily? I heard that Flex has functions to handle grids and other things (drag & scroll, zoom-like features), but I am not sure about the graphics capabilities of Flex. Are the components customizable? Please help me.

    Read the article

  • What's faster in JavaScript: a bunch of small setInterval loops, or one big one?

    - by RobertWHurst
    Just wondering if it's worth it to make a monolithic loop function or to just add loops where they're needed. The big-loop option would just be a loop of callbacks that are added dynamically with an add function. Adding a function would look like this:

        setLoop(function(){
            alert('hahaha! I\'m a really annoying loop that bugs you every tenth of a second');
        });

    setLoop would add the function to the monolithic loop. So, is this worth anything in performance, or should I just stick to lots of little loops using setInterval?
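    A minimal sketch of what the monolithic version could look like (keeping the poster's setLoop name; the 100ms tick is an assumption):

        var callbacks = [];

        function setLoop(fn) {
            callbacks.push(fn);
        }

        // One shared timer runs every registered callback per tick,
        // instead of one setInterval per loop.
        setInterval(function () {
            for (var i = 0; i < callbacks.length; i++) {
                callbacks[i]();
            }
        }, 100);

    With only a handful of callbacks the difference is negligible; the single timer mainly buys fewer timer events and one predictable tick.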

    Read the article

  • How to create a delayed queue in RabbitMQ?

    - by eandersson
    What is the easiest way to create a delay (or parking) queue with Python, Pika and RabbitMQ? I have seen similar questions, but none for Python. I find this a useful idea when designing applications, as it allows us to throttle messages that need to be re-queued. There is always the possibility that you will receive more messages than you can handle; maybe the HTTP server is slow, or the database is under too much stress. I also found it very useful in scenarios where there is zero tolerance for losing messages: while re-queuing messages that could not be handled may solve that, it can also cause problems where the message is queued over and over again, potentially causing performance issues and log spam.
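    The usual recipe is a per-queue message TTL plus a dead-letter exchange: messages park in a delay queue until they expire, and RabbitMQ then routes them to the real queue. A minimal Pika sketch (queue names and the 5-second TTL are illustrative):

        import pika

        conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
        ch = conn.channel()

        ch.queue_declare(queue='work')                   # consumers read this
        ch.queue_declare(queue='work.delay', arguments={
            'x-message-ttl': 5000,                       # park messages for 5s
            'x-dead-letter-exchange': '',                # default exchange
            'x-dead-letter-routing-key': 'work',         # then deliver to 'work'
        })

        # Re-queuing into the delay queue gives the message a 5s breather.
        ch.basic_publish(exchange='', routing_key='work.delay', body='retry me')
        conn.close()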

    Read the article

  • Rails Unit Testing with MyISAM Tables

    - by tadman
    I've got an application that requires the use of MyISAM on a few tables, but the rest are the traditional InnoDB type. The application itself is not concerned with transactions where it applies to these records, but performance is a concern. The Rails testing environment assumes the engine used is transactional, though, so when the test database is generated from schema.rb it is imported with the same engine. Is it possible to override this behaviour in a simple manner? I've resorted to an awful hack to ensure the tables are the correct type by appending this to test_helper.rb:

        (ActiveRecord::Base.connection.select_values("SHOW TABLES") - %w[ schema_info ]).each do |table_name|
          ActiveRecord::Base.connection.execute("ALTER TABLE `#{table_name}` ENGINE=InnoDB")
        end

    Is there a better way to make a MyISAM-backed model testable?
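    One possibly cleaner avenue, sketched under the assumption that the MyISAM tables are declared in migrations: switch the schema dump to SQL so per-table ENGINE clauses survive into the test database rebuild.

        # config/environment.rb: dump SQL structure instead of schema.rb,
        # preserving ENGINE=MyISAM (and any other MySQL-specific DDL).
        config.active_record.schema_format = :sql

    The fixtures problem remains for the MyISAM tables themselves, though: transactional fixtures can only roll back on engines that support it.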

    Read the article

  • Using PHP to place database rows into an array?

    - by Hamed Szilazi
    I was just wondering how I would perform an SQL query and then place each row into a new array. For example, let's say a table looked like the following:

        $people = mysql_query("SELECT * FROM friends");

    Output:

        | ID | Name | Age |
        |  1 | tom  |  32 |
        |  2 | dan  |  22 |
        |  3 | pat  |  52 |
        |  4 | nik  |  32 |
        |  5 | dre  |  65 |

    How could I create a multidimensional array that works in the following way: the first row's second column is accessed using $people[0][1], and the fifth row's third column is accessed using $people[4][2]? How would I go about constructing this type of array? Sorry if this is a strange question; I am new to PHP+SQL and would like to know how to directly access data. Performance and speed are not an issue, as I am just writing small test scripts to get to grips with the language.
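    A minimal sketch with the legacy mysql extension the post already uses: mysql_fetch_row() returns each row with numeric column keys, so appending rows yields exactly the $people[row][column] shape:

        <?php
        $result = mysql_query("SELECT * FROM friends");

        $people = array();
        while ($row = mysql_fetch_row($result)) {  // numeric keys: 0, 1, 2...
            $people[] = $row;
        }

        echo $people[0][1];  // first row, second column  -> "tom"
        echo $people[4][2];  // fifth row, third column   -> 65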

    Read the article

  • Clustering and DB Replication in virtualized (and cloud) environments

    - by devdude
    Both replication and clustering are terms for server setups with physical (real) servers, usually implemented at the DB or application-server level. Now the question: in a virtualized environment with "easily" scalable servers (touching on clustering) and higher availability (as DB replication would provide) through the cloud provider's high availability of the virtual server, do we really need replication and clustering (in the sense of covering the problems of traditional servers)? The question is asked from a solution/application provider's viewpoint. Please exclude the need for replication driven by business requirements, e.g. the need to replicate a DB at two different geographical locations to ensure performance and data safety. Thanks for your insights!

    Read the article

  • Graph visualization code in javascript?

    - by Chris Farmer
    Hi. I have a data structure that represents a directed graph, and I want to render it dynamically on an HTML page. Does anyone know of any JavaScript code that can do a reasonable job with graph layout? These graphs will usually be just a few nodes, maybe ten at the very upper end, so my guess is that performance isn't going to be a big deal. Ideally, I'd like to be able to hook it in with jQuery so that users can tweak the layout manually by dragging the nodes around. Edit: Google's Visualization API seems to be more "graphs as charts" oriented than "graphs as nodes" oriented. I didn't see any node-oriented visualizations already built there, anyway. Do you know of one that exists?

    Read the article

  • Is it really wrong to version documents using CouchDB's default behaviour?

    - by Tomas Sedovic
    This is one of those "I know I shouldn't do this, but it's oh so convenient" questions. Sorry about that. I plan to use CouchDB for storing a bunch of documents and keeping their entire revision history. CouchDB does the versioning automatically, but relying on it is strongly discouraged: "You cannot rely on document revisions for any other purpose than concurrency control." From what I've found on the CouchDB wiki, old versions can get deleted either during compaction or during replication. As far as I can tell, compaction must always be triggered manually, and replication occurs only when there's more than one database server. The question is: if I won't run compaction and will use only a single database instance for my documents, can I just use CouchDB's document versioning and expect it to work? What other problems might I run into? E.g., does not running compaction hurt performance or consume significantly more disk space than if I handled the versioning manually?
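    For reference, while old revisions still exist they are reachable over the HTTP API like this (database, document id, and revision are placeholders):

        # list the revisions CouchDB still knows about for a document
        curl 'http://127.0.0.1:5984/mydb/mydoc?revs_info=true'

        # fetch the body of one specific old revision
        curl 'http://127.0.0.1:5984/mydb/mydoc?rev=1-946b7d1c'

    The wiki's warning stands, though: nothing but your own discipline prevents a future compaction (or a replica) from dropping those revision bodies.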

    Read the article
