Search Results

Search found 70827 results on 2834 pages for 'data quality services'.

Page 185 of 2834

  • How to bring coordination between file system and database?

    - by Lock up
    I am working on an online file management project. We store references in the database (SQL Server) and the file contents on the file system. We are facing a coordination problem between the file system and the database when uploading a file, and also when deleting one. Should we create the reference in the database first, or store the file on the file system first? The problem is that if I create the reference in the database first and then an error occurs while writing the file to the file system, the reference exists in the database but no file data exists on disk. Please suggest a way to deal with this situation; I badly need one. The same issue arises when deleting a file.
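    One way to sequence this (sketched below in Python, since no platform was specified; save_reference() is a hypothetical stand-in for the actual SQL Server insert) is to write the file first and only create the database reference once the write has succeeded, cleaning up the orphaned file if the insert fails:

        import os
        import shutil
        import uuid

        def store_upload(upload_stream, storage_dir, save_reference):
            # 1. Write the file under a generated name. Nothing references it yet,
            #    so a crash here leaves at worst an orphaned file, never a dangling row.
            file_name = uuid.uuid4().hex
            path = os.path.join(storage_dir, file_name)
            with open(path, "wb") as out:
                shutil.copyfileobj(upload_stream, out)

            # 2. Only now create the database reference. If that fails, undo step 1.
            try:
                save_reference(file_name)  # hypothetical: INSERT into the SQL Server table
            except Exception:
                os.remove(path)
                raise
            return file_name

    Deletion can use the mirror order: remove the database reference first, then delete the file, with a periodic cleanup job sweeping any files that no longer have a matching row.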

    Read the article

  • Saving Core Data in a thread: how to ensure it's done writing before quitting?

    - by Shizam
    So I'm saving small images to Core Data, which takes a really short amount of time, like 0.2 seconds, but I'm doing it while the user is flipping through a scroll view, so to keep the UI responsive I've moved the saving to a thread. This works great: everything gets saved and the app stays responsive. However, there is one thing in the Core Data multithreading documentation that worries me: "In Cocoa, only the main thread is not-detached. If you need to save on other threads, you must write additional code such that the main thread prevents the application from quitting until all the save operation is complete." Ok, how do you do that? It only needs to last ~0.2 seconds, and it's rarely going to happen since the chance of the app quitting while something is saving is very low. How do I run something on the main thread that'll prevent the app from quitting AND not block the GUI? Thanks

    Read the article

  • Can I add a custom method to Core Data-generated classes?

    - by Andy
    I've got a couple of Core Data-generated class files that I'd like to add custom methods to. I don't need to add any instance variables. How can I do this? I tried adding a category of methods:

        // ContactMethods.h (my category on Core Data-generated "Contact" class)
        #import "Contact.h"
        @interface Contact (ContactMethods)
        -(NSString*)displayName;
        @end

        // ContactMethods.m
        #import "ContactMethods.h"
        @implementation Contact (ContactMethods)
        -(NSString*)displayName {
            return @"Some Name"; // this is test code
        }
        @end

    This doesn't work, though. I get a compiler message that "-NSManagedObject may not respond to 'displayName'" and sure enough, when I run the app, I don't get "Some Name" where I should be seeing it.

    Read the article

  • iOS Development: Can I store an array of integers in a Core Data object without creating a new table to represent the array?

    - by BeachRunnerJoe
    Hello. I'm using Core Data and I'm trying to figure out the simplest way to store an array of integers in one of my Core Data entities. Currently, my entities contain various arrays of objects that are more complex than a single number, so it makes sense to represent those arrays as tables in my DB and attach them using relationships. If I want to store a simple array of integers, do I need to create a new table with a single column and attach it using a one-to-many relationship? Or is there a more simple way? Thanks in advance for your wisdom!

    Read the article

  • iPhone application development -- passing data to and from the server

    - by SAPNA
    I have to develop an iPhone application. The user logs in through the iPhone and gets data stored in the database. Our database is MySQL, the website is developed in (classic) ASP, and the interface is built with the iPhone SDK. The connection between them is what remains. What should I use for transferring data to and from the server: JSON or SOAP? Is XML parsing necessary? I am very new to this field, so I'm a bit confused, and we have limited time left to complete our application, so I'm in urgent need of help. Thank you in advance.

    Read the article

  • How to use Joomla to allow users to create/update data on my site?

    - by gromo
    Right now I'm using an extension called ChronoForms, but I'm open to anything that works. I can make forms just fine, and it saves the submitted data to a table. Where I am stuck is: how do I then allow the front-end users who filled out and submitted the form to go back, view their old answers, change them, and resubmit or resave them? To do this, I think I would need to retrieve their last answers from the table where they are stored, plug that user's most recent record back into the original form on the front end, and let the user change and resubmit it. Or is there a better way to let a user change their answers to a form and resave them? If anyone knows how to do this, or if there is a better approach (perhaps another extension handles this kind of thing), please let me know. Thanks!

    Read the article

  • RiverTrail - JavaScript GPGPU Data Parallelism

    - by JoshReuben
    Where is WebCL?
    The Khronos WebCL working group is working on a JavaScript binding to the OpenCL standard so that HTML5-compliant browsers can host GPGPU web apps – e.g. for image processing or physics for WebGL games - http://www.khronos.org/webcl/ . While Nokia & Samsung have some prototype WebCL APIs, Intel has one-upped them with a higher level of abstraction: RiverTrail.

    Intro to RiverTrail
    Intel Labs' JavaScript RiverTrail provides GPU-accelerated SIMD data parallelism in web applications via a familiar JavaScript programming paradigm. It extends JavaScript with simple deterministic data-parallel constructs that are translated at runtime into a low-level hardware abstraction layer. With its high-level JS API, programmers do not have to learn a new language or explicitly manage threads, orchestrate shared-data synchronization or scheduling. It has been proposed as a draft specification to ECMA (known as an ECMA strawman). RiverTrail runs in all popular browsers (except I.E., of course). To get started, download a prebuilt version https://github.com/downloads/RiverTrail/RiverTrail/rivertrail-0.17.xpi , install Intel's OpenCL SDK http://www.intel.com/go/opencl and try out the interactive River Trail shell http://rivertrail.github.com/interactive . For a video overview, see http://www.youtube.com/watch?v=jueg6zB5XaM .

    ParallelArray
    The ParallelArray type is the central component of this API. It is a JS object that contains ordered collections of scalars – i.e. multidimensional uniform arrays. A shape property describes the dimensionality and size – e.g. a 2D RGBA image will have shape [height, width, 4]. ParallelArrays are immutable and fluent – they are manipulated by invoking methods on them which produce new ParallelArray objects. ParallelArray supports several constructors over arrays, functions and even the canvas.

        // Create an empty ParallelArray
        var pa = new ParallelArray(); // pa0 = <>

        // Create a ParallelArray out of a nested JS array.
        // Note that the inner arrays are also ParallelArrays
        var pa = new ParallelArray([ [0,1], [2,3], [4,5] ]); // pa1 = <<0,1>, <2,3>, <4,5>>

        // Create a two-dimensional ParallelArray with shape [3, 2] using the comprehension constructor
        var pa = new ParallelArray([3, 2], function(iv){return iv[0] * iv[1];}); // pa7 = <<0,0>, <0,1>, <0,2>>

        // Create a ParallelArray from canvas. This creates a PA with shape [w, h, 4]
        var pa = new ParallelArray(canvas); // pa8 = CanvasPixelArray

    ParallelArray exposes fluent API functions that take an elemental JS function for data manipulation: map, combine, scan, filter, and scatter, each returning a new ParallelArray. Other functions are scalar - reduce returns a scalar value and get returns the value located at a given index. The onus is on the developer to ensure that the elemental function does not defeat data-parallelization optimization (avoid global variable manipulation and recursion). For reduce and scan, order is not guaranteed - the onus is on the developer to provide an elemental function that is commutative and associative so that scan will be deterministic – e.g. Sum is associative, but Avg is not.

    map
    Applies a provided elemental function to each element of the source array and stores the result in the corresponding position in the result array. The map method is shape-preserving and index-free - it cannot inspect neighboring values.

        // Adding one to each element.
        var source = new ParallelArray([1,2,3,4,5]);
        var plusOne = source.map(function inc(v) {
            return v+1;
        }); // <2,3,4,5,6>

    combine
    Combine is similar to map, except an index is provided. This allows elemental functions to access elements from the source array relative to the one at the current index position. While the map method operates on the outermost dimension only, combine can choose how deep to traverse - it provides a depth argument to specify the number of dimensions it iterates over. The elemental function of combine accesses the source array and the current index within it - an element is computed by calling the get method of the source ParallelArray object with index i as argument. It requires more code but is more expressive.

        var source = new ParallelArray([1,2,3,4,5]);
        var plusOne = source.combine(function inc(i) {
            return this.get(i)+1;
        });

    reduce
    Reduces the elements of an array to a single scalar result – e.g. Sum.

        // Calculate the sum of the elements
        var source = new ParallelArray([1,2,3,4,5]);
        var sum = source.reduce(function plus(a,b) {
            return a+b;
        });

    scan
    Like reduce, but stores the intermediate results – returns a ParallelArray whose ith element is the result of using the elemental function to reduce the elements between 0 and i in the original ParallelArray.

        // do a partial sum
        var source = new ParallelArray([1,2,3,4,5]);
        var psum = source.scan(function plus(a,b) {
            return a+b;
        }); // <1, 3, 6, 10, 15>

    scatter
    A reordering function - specify, for a certain source index, where it should be stored in the result array. An optional conflict function can prevent an exception if two source values are assigned the same position of the result:

        var source = new ParallelArray([1,2,3,4,5]);
        var reorder = source.scatter([4,0,3,1,2]); // <2, 4, 5, 3, 1>

        // if there is a conflict use the max; use 33 as a default value.
        var reorder = source.scatter([4,0,3,4,2], 33, function max(a, b) {
            return a>b?a:b;
        }); // <2, 33, 5, 3, 4>

    filter

        // filter out values that are not even
        var source = new ParallelArray([1,2,3,4,5]);
        var even = source.filter(function even(iv) {
            return (this.get(iv) % 2) == 0;
        }); // <2,4>

    Flatten
    Used to collapse the outer dimensions of an array into a single dimension.

        pa = new ParallelArray([ [1,2], [3,4] ]); // <<1,2>,<3,4>>
        pa.flatten(); // <1,2,3,4>

    Partition
    Used to restore the original shape of the array.

        var pa = new ParallelArray([1,2,3,4]); // <1,2,3,4>
        pa.partition(2); // <<1,2>,<3,4>>

    Get
    Returns the value found at the indices, or undefined if no such value exists.

        var pa = new ParallelArray([0,1,2,3,4], [10,11,12,13,14], [20,21,22,23,24])
        pa.get([1,1]); // 11
        pa.get([1]); // <10,11,12,13,14>

    Read the article

  • PXE-E32 TFTP Open Timeout While Attempting to PXE Boot from Windows Deployment Services

    - by bschafer
    I'm running Windows Deployment Services on Windows Server 2008 R2 on top of an ESX 4.0 box. This is the only function of this VM instance, although it had previously functioned as an AD Domain Controller. My DHCP server is running on our primary Domain Controller, which is also Server 2008 R2, but running on metal. Everything was working perfectly until we recently had our backup generator fail during a power outage, causing all of our servers and networking equipment to lose power for a period of time. When we brought all of our equipment back up, everything was working as expected except for WDS. Our network is split up into several different vlans. Now, depending on which vlan the client computer is on, it's behaving differently when attempting to PXE boot into WDS. Our servers are located on the 10.55.x.x vlan, which, due to the nature of it, has no DHCP server active in it. The first computer we plugged in happened to be in the 10.99.x.x vlan, which is supposed to be reserved for network management devices (i.e. switches), but we've been using it occasionally otherwise. That computer gave us PXE-E11 ARP Timeout errors. When we moved to a different computer on the 10.19.x.x vlan (for general purpose use), it finally gets an IP from DHCP, but it presents us with a very stumping PXE-E32 TFTP Open Timeout error. Before the power outage, it didn't matter which vlan a device was on; it would PXE boot and image just fine. I've made no changes to anything server-side. Everything is configured exactly the same way it was on my WDS and DHCP servers as before the power outage. I've tried several different computers, including different models. All of this, combined with the quirky behavior depending on the vlan, makes me think something went wrong in one or more of our switches, probably because of the power outage. Unfortunately, I'm no network guy, and I know very little about how to configure our switches properly. Is this an issue with switches, etc? If so, how can I fix it? Is there some magical option I'm not aware of? Does anybody out there have any hunches? I've pretty much exhausted my ideas. Our main switch is an HP Procurve 5406. We also have 3x HP Procurve 4208 switches. The ESX Server is an HP ProLiant DL380 G6. The WDS VM is currently using the VMXNET3 network adaptor, but we've also tried the E1000 adaptor.

    Read the article

  • How do I disable location services system wide?

    - by Daisetsu
    Google has an API which can determine someone's location based on the wifi router names which a user's computer can see. You will see this if you go to google maps and your browser may ask if you would like to share location data. I am wondering if there is any way to disable this on a system wide setting rather than just in each browser (Chrome can do this too). Is there any way I can limit which applications have a list of the wireless routers I can see?

    Read the article

  • Looking for app to work fluidly with CSV data in graph form

    - by Aszurom
    It often occurs to me that if I had a good tool for viewing CSV data in graphical format, and comparing two sets of numbers to each other, I could do a great deal of meaningful trend watching and data interpretation. For example, perfmon can output quite a lot of data about a server into a CSV file, but there's no good way to view it. A lot of scripts could/have been written that would populate CSV files. I could write these all day long. My problem is that I need a great viewer. I've seen quite a few things that will take a CSV file and after a lot of tweaking and user adjustment produce a static gif/png image. A static image doesn't do me a lot of good, because I have to look at it, then re-calibrate the parameters of the program, regenerate the image, repeat. That sucks. I could do this in Excel. Ideally, I would want a FLUID graph viewer. On the fly, I can adjust how much of my timeline I'm viewing. I could adjust the scaling so that one big spike doesn't make 99.9% of the data an unreadable line across the bottom of the X axis. Stuff like that. I should be able to say "show me CSV column 3 and column 5 as graphs. Show me the data scaled for 20 or 150 entries, and let me slide that window up and down the column of data. Auto scale to fit 95% of data within the Y axis and let crazy spikes go off the screen." Maybe I'm terribly spoiled by how you can drag, zoom, and slide data around on my iPad. I want to be able to view a spreadsheet of data with that fluidity and not have to guess at what sort of static snapshot I want to create from it. I don't want to have to make a study of how to tweak some data plotting program to let me import my file and do what I could just do in Excel. I want to scale, zoom, and transform my graph on the fly and then export a snapshot of it once I have it the way I want it. Is there anything out there that fills this need? I'll take linux, osx, win32 or even iOS suggestions.
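    If nothing off the shelf fits, a rough starting point for a slide-able viewer can be stitched together with Python and matplotlib's Slider widget; a sketch, where the CSV file name, column index and window size are placeholders:

        import csv
        import matplotlib.pyplot as plt
        from matplotlib.widgets import Slider

        # Load one numeric column from a CSV file (hypothetical file and column).
        with open("perfmon.csv") as f:
            rows = list(csv.reader(f))
        values = [float(r[2]) for r in rows[1:]]  # column 3, skipping the header row

        window = 150
        fig, ax = plt.subplots()
        plt.subplots_adjust(bottom=0.2)
        line, = ax.plot(range(window), values[:window])

        # Slider to slide the viewing window up and down the column of data.
        slider_ax = fig.add_axes([0.15, 0.05, 0.7, 0.04])
        slider = Slider(slider_ax, "offset", 0, max(len(values) - window, 1),
                        valinit=0, valstep=1)

        def update(offset):
            start = int(offset)
            chunk = values[start:start + window]
            line.set_data(range(len(chunk)), chunk)
            ax.relim()
            ax.autoscale_view()  # rescale the axes to the visible window
            fig.canvas.draw_idle()

        slider.on_changed(update)
        plt.show()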

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things; here is a list:
        - make a complete filesystem clone with Antonio Diaz's ddrescue
        - run Disk Warrior on the copy, repair whatever errors occurred
        - wipe out all ACLs on the entire drive
        - set all permissions to the same value (wide open 777)
        - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
        - transfer data to a newly formatted drive folder by folder, resetting the Spotlight index between each addition to observe for issues (interesting here is that no issues occurred except with the Documents folder; when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble. It appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
        - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files
    Between each of the above steps I stopped Spotlight (search for anything beginning with md in Activity Monitor - All Processes - and quit it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated index time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue, and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? Thanks, M

    Read the article

  • Change the number in the last column of comma-separated data in Notepad++

    - by user329311
    I have rows of data, each separated with commas. How can I replace the last number after the last comma on each row with the number 5 in Notepad++? For example, how do I replace 9, 17 and 124 with 5 in the data below? I have millions of rows of data, though, and Excel doesn't have enough rows for all of it. Sample data:
        2009.10.21,05:31,1.49312,1.49312,1.49306,1.49306,9
        2009.10.21,05:32,1.49306,1.49308,1.49303,1.49305,17
        2009.10.21,05:33,1.49305,1.4931,1.49305,1.49309,124
    Thank you for your help.
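    In Notepad++, a regular-expression replace (search mode set to "Regular expression") of ,[0-9]+$ with ,5 should do this line by line. If the file is too large for Notepad++ to handle comfortably, a small Python script can stream it instead; a sketch with placeholder input/output file names:

        import re

        # Rewrite the last comma-separated field of every row to "5".
        with open("quotes.csv") as src, open("quotes_out.csv", "w") as dst:
            for line in src:
                dst.write(re.sub(r",[^,]*$", ",5", line.rstrip("\n")) + "\n")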

    Read the article

  • Accessing temperature data on ESXi5.1?

    - by Ovesh
    For ESXi 5.1 (VMware vSphere) images, I can see temperature data in the vSphere user interface (under Monitor / Hardware Status). I tried scouring the available SNMP data using snmpwalk, but can't find the data anywhere in there. Maybe I'm missing something. Does anybody know the right MIB for temperature data? Otherwise, how can that data be accessed? By the way, this is a machine installed from an image provided by HP.

    Read the article

  • WPF animation/UI features performance and benchmarking

    - by Rich
    I'm working on a relatively small proof-of-concept for some line of business stuff with some fancy WPF UI work. Without even going too crazy, I'm already seeing some really poor performance when using a lot of the features that I thought were the main reason to consider WPF for UI building in the first place. I asked a question on here about why my animation was being stalled the first time it was run, and at the end what I found was that a very simple UserControl was taking almost half a second just to build its visual tree. I was able to get a work around to the symptom, but the fact that it takes that long to initialize a simple control really bothers me. Now, I'm testing my animation with and without the DropShadowEffect, and the result is night and day. A subtle drop shadow makes my control look so much nicer, but it completely ruins the smoothness of the animation. Let me not even start with the font rendering either. The calculation of my animations when the control has a bunch of gradient brushes and a drop shadow make the text blurry for about a full second and then slowly come into focus. So, I guess my question is if there are known studies, blog posts, or articles detailing which features are a hazard in the current version of WPF for business critical applications. Are things like Effects (ie. DropShadowEffect), gradient brushes, key frame animations, etc going to have too much of a negative effect on render quality (or maybe the combinations of these things)? Is the final version of WPF 4.0 going to correct some of these issues? I've read that VS2010 beta has some of these same issues and that they are supposed to be resolved by final release. Is that because of improvements to WPF itself or because half of the application will be rebuilt with the previous technology?

    Read the article

  • Hiring a programmer: looking for the "right attitude"

    - by Totophil
    It's actually two questions in one: What is the right attitude for a programmer? How do you (or would you) look for it when interviewing or during the hiring process? Please note this question is not about the personality or traits of a candidate; it is about their attitude towards what they do for a living. This is also not about the reverse of programmers' pet peeves. The question has been made community wiki, since I am interested in a good answer rather than reputation. I disagree that the question is purely subjective and just a matter of opinion: clearly some attitudes make a better programmer than others. Consequently, there might quite possibly exist an attitude that is common to most of the better programmers. Update: After some deliberation I came up with the following attitude measurement scales:
        identifies themselves with the job ↔ fully detached
        perceives code as a collection of concepts ↔ sees code as a sequence of steps
        thinks of creating software as an art ↔ takes a 100% rational approach to design and development
    Answers that include some comment on the appropriateness of these scales are greatly appreciated. Definition of "attitude": a complex mental state involving beliefs and feelings and values and dispositions to act in certain ways; "he had the attitude that work was fun". The question came as a result of some reflection on the top-voted answer to "How do you ensure code quality?" here on Stack Overflow.

    Read the article

  • Visual Studio project remains "stuck" when stopped

    - by Traveling Tech Guy
    Hi, I'm currently developing a connector DLL to HP's Quality Center. I'm using their (insert expletive) COM API to connect to the server. An Interop wrapper gets created automatically by Visual Studio. My solution has 2 projects: the DLL and a tester application, essentially a form with buttons that call functions in the DLL. Everything works well - I can create defects, update them and delete them. When I close the main form, the application stops nicely. But when I call a function that returns a list of all available projects (to fill a combo box), if I close the main form, Visual Studio still shows the solution as running and I have to stop it. I've managed to pinpoint a single function in my code that, when I call it, leaves the solution "hung"; if I don't call it, it closes fine. It's a call to a property on the TDC object, get_VisibleProjects, that returns a List (not the .NET one, but a type in the COM library) - I just iterate over it and return a proper list (that I later use to fill the combo box):

        public List<string> GetAvailableProjects()
        {
            List<string> projects = new List<string>();
            foreach (string project in this.tdc.get_VisibleProjects(qcDomain))
            {
                projects.Add(project);
            }
            return projects;
        }

    My assumption is that something gets retained in memory. If I run the EXE outside of Visual Studio it closes - but who knows what gets left behind in memory? My question is: how do I get rid of whatever calling this property returns? Shouldn't the GC handle this? Do I need to delve into pointers? Things I've tried:
        - getting the list into a variable and setting it to null at the end of the function
        - adding a destructor to the class and nulling the tdc object
        - stepping through the tester application all the way out; when the form closes and the Main function ends, it closes, but Visual Studio still shows it as running.
    Thanks for your assistance!

    Read the article

  • Is there any library/software available that can give me useful measurements of image quality?

    - by Thor84no
    I realise measuring image quality in software is going to be really difficult, and I'm not looking for a quick-fix. Googling this is largely showing up research papers and discussions that go a bit over my head, so I was wondering if anyone in the SO community had any experience with doing any rough image quality assessment? I want to use this to scan a few thousand images and whittle it down to a few dozen images that are most likely of poor quality. I could then show these to a user and leave the rest to them. Obviously there are many metrics that can be a part of whether an image is of high/low quality, I'd be happy with anything that could take an image as an input and give some reasonable metrics to any of the basic image quality metrics like sharpness, dynamic range, noise, etc., leaving it up to my software to determine what's acceptable and what isn't. Some of the images are poor quality because they've been up-scaled drastically. If there isn't a way of getting metrics like I suggested above, is there any way to detect that an image has been up-scaled like this?
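    As a crude starting point for one of those metrics, sharpness can be approximated by the variance of the Laplacian; heavily up-scaled or blurry images tend to score low. A sketch assuming OpenCV (cv2) is available; the threshold and file names are illustrative only:

        import cv2

        def sharpness_score(path):
            # Variance of the Laplacian: a common blur heuristic.
            # Low values suggest a soft, noisy, or heavily up-scaled image.
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if img is None:
                raise ValueError("could not read %s" % path)
            return cv2.Laplacian(img, cv2.CV_64F).var()

        # Flag the lowest-scoring images for manual review.
        paths = ["img001.jpg", "img002.jpg"]  # placeholder file names
        suspects = [p for p in paths if sharpness_score(p) < 100.0]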

    Read the article

  • juju illegal base64 data at input byte 9

    - by ayr-ton
    After bootstrap a environment via manual provisioning, juju give me the following output for juju status: ERROR Unable to connect to environment "manual". Please check your credentials or use 'juju bootstrap' to create a new environment. Error details: illegal base64 data at input byte 9 And doing bootstrap again shows me: WARNING ignoring environments.yaml: using bootstrap config in file "/home/ayrton/.juju/environments/manual.jenv" ERROR illegal base64 data at input byte 9 The first bootstrap shows me no error, but the status crash as above and the second one output is just the base64 error. My juju version is 1.19.4-trusty-amd64, running in trusty 64. The bootstrap environment is a VPS with 1GB of memory, 20GB of hd and precise 64bits. Please, let me know if I can provide any further information.

    Read the article

  • How to handle business rules with a REST API?

    - by Ciprio
    I have a REST API to manage a booking system and I'm trying to work out how to handle this situation: a customer can book a time slot, so a TimeSlot resource is created and linked to a Person resource. To create the link between a time slot and a person, the REST client sends a POST request to the TimeSlot resource. But if too many people have booked the same slot (let's say the limit is 5 links), it must be impossible to create more associations. How can I handle this business restriction? Can I return a 404 status code with a JSON response detailing the error? Is that a RESTful approach? EDIT: As suggested below, I used status 409 Conflict together with a JSON response detailing the error.
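    For illustration, this is roughly what the 409 approach can look like; a minimal sketch in Python with Flask, using an in-memory store and placeholder identifiers rather than a real persistence layer:

        from flask import Flask, jsonify

        app = Flask(__name__)

        MAX_LINKS = 5
        bookings = {}  # slot_id -> list of person ids (stand-in for real storage)

        @app.route("/timeslots/<int:slot_id>/bookings", methods=["POST"])
        def book_slot(slot_id):
            linked = bookings.setdefault(slot_id, [])
            if len(linked) >= MAX_LINKS:
                # Business rule violated: reject with 409 Conflict and a machine-readable body.
                return jsonify(error="slot_full",
                               detail="this time slot already has %d bookings" % MAX_LINKS), 409
            linked.append("person-id")  # placeholder; a real API would read the person from the request
            return jsonify(status="booked", slot=slot_id), 201

        if __name__ == "__main__":
            app.run()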

    Read the article

  • USB packets - receive wrong data

    - by regorianer
    I have a little Python script which shows me the packets of an EnOcean device and triggers some events depending on the packet type. Unfortunately it doesn't work because I'm getting wrong packets. Part of the Python script (using pySerial):

        import sys
        import serial

        ser = serial.Serial('/dev/ttyUSB1', 57600, bytesize=serial.EIGHTBITS,
                            timeout=1, parity=serial.PARITY_NONE, rtscts=0)
        print 'clearing buffer'
        s = ser.read(10000)
        print 'start read'
        while 1:
            s = ser.read(1)
            for character in s:
                sys.stdout.write(" %s" % character.encode('hex'))
        print 'end'
        ser.close()

    Output at baud rate 57600:

        e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00 e0 e0 e0 e0 e0 00 e0 00 e0 e0 e0 e0 e0 00 e0 e0 00 00 00 00 00 00 e0 e0 e0 00 00 00 00 e0 e0 e0 00 00 e0 e0 e0

    Output at baud rate 9600:

        a5 5a 0b 05 10 00 00 00 00 15 c4 56 20 6f a5 5a 0b 05 00 00 00 00 00 15 c4 56 20 5f

    Linux terminal at baud rate 57600:

        $ stty -F /dev/ttyUSB1 57600
        $ stty < /dev/ttyUSB1
        speed 57600 baud; line = 0; eof = ^A; min = 0; time = 0;
        -brkint -icrnl -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke
        $ while (true) do cat -A /dev/ttyUSB1 ; done > myfile
        $ hexdump -C myfile
        00000000 4d 2d 60 4d 2d 60 5e 40 4d 2d 60 5e 40 4d 2d 60 |M-M-^@M-^@M-|
        00000010 4d 2d 60 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 4d 2d |M-M-M-M-^@M-|
        00000020 60 4d 2d 60 5e 40 5e 40 5e 40 5e 40 5e 40 5e 40 |M-^@^@^@^@^@^@|
        00000030 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 5e 40 5e 40 5e |^@M-M-M-`^@^@^|
        00000040 40 5e 40 4d 2d 60 4d 2d 60 4d 2d 60 |@^@M-M-M-`|
        0000004c

    Linux terminal at baud rate 9600:

        $ hexdump -C myfile2
        00000000 5e 40 5e 55 4d 2d 44 56 30 4d 2d 3f 5e 40 5e 40 |^@^UM-DV0M-?^@^@|
        00000010 5e 55 4d 2d 44 56 20 5f |^UM-DV _|
        00000018

    The specification says:

        0x55    sync byte
        0xNNNN  data length (2 bytes)
        0x07    optional-data length byte
        0x01    type byte
        CRC, data, optional data, and another CRC

    but I'm not getting this packet structure. The output of the Python script differs from what I get via the terminal. I also wrote the Python part in C, but the output is the same as with Python. The USB receiver is a BSC-BoR USB receiver/sender and the EnOcean device is a simple button.
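    To make the framing described above concrete, here is a rough sketch of reading one packet with that structure (sync on 0x55, 2-byte data length, optional-data length, type, with the CRC bytes skipped rather than verified); the port name and baud rate are taken from the script above, and the field handling is an assumption based on the quoted spec:

        import serial

        ser = serial.Serial('/dev/ttyUSB1', 57600, timeout=1)

        def read_packet(ser):
            # resynchronise on the 0x55 sync byte
            b = ser.read(1)
            while b and b != b'\x55':
                b = ser.read(1)
            header = bytearray(ser.read(4))
            if len(header) < 4:
                return None
            data_len = (header[0] << 8) | header[1]  # 2-byte data length
            opt_len = header[2]                      # optional-data length
            ptype = header[3]                        # packet type
            ser.read(1)                              # header CRC (not checked here)
            payload = ser.read(data_len + opt_len)
            ser.read(1)                              # data CRC (not checked here)
            return ptype, payload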

    Read the article

  • Managing multiple reverse proxies for one virtual host in apache2

    - by Chris Betti
    I have many reverse proxies defined for my js-host VirtualHost, like so:

        /etc/apache2/sites-available/js-host
        <VirtualHost *:80>
            ServerName js-host.example.com
            [...]
            ProxyPreserveHost On
            ProxyPass /serviceA http://192.168.100.50/
            ProxyPassReverse /serviceA http://192.168.100.50/
            ProxyPass /serviceB http://192.168.100.51/
            ProxyPassReverse /serviceB http://192.168.100.51/
            [...]
            ProxyPass /serviceZ http://192.168.100.75/
            ProxyPassReverse /serviceZ http://192.168.100.75/
        </VirtualHost>

    The js-host site is acting as shared config for all of the reverse proxies. This works, but managing the proxies involves edits to the shared config and an apache2 restart. Is there a way to manage individual proxies with a2ensite and a2dissite (or a better alternative)? My main objective is to isolate each proxy config as a separate file and manage it via commands.

    First attempt: I tried making separate files with their own VirtualHost entries for each service:

        /etc/apache2/sites-available/js-host-serviceA
        <VirtualHost *:80>
            ServerName js-host.example.com
            [...]
            ProxyPass /serviceA http://192.168.100.50/
            ProxyPassReverse /serviceA http://192.168.100.50/
        </VirtualHost>

        /etc/apache2/sites-available/js-host-serviceB
        <VirtualHost *:80>
            ServerName js-host.example.com
            [...]
            ProxyPass /serviceB http://192.168.100.51/
            ProxyPassReverse /serviceB http://192.168.100.51/
        </VirtualHost>

    The problem with this is that apache2 loads the first VirtualHost for a particular ServerName and ignores the rest. They aren't "merged" somehow, as I'd hoped.

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel. Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation. Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria. This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately. The Task Parallel Library has tools to assist in this synchronization. The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data. Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

        double min = double.MaxValue;
        foreach(var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    This seems like a good candidate for parallelization, but there is a problem here. If we just wrap this into a call to Parallel.ForEach, we'll introduce a critical race condition, and get the wrong answer. Let's look at what happens here:

        // Buggy code! Do not use!
        double min = double.MaxValue;
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        });

    This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously. Two threads may perform the check at the same time, and set the wrong value for min. Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run. If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively. If element 1 happens to set the variable first, then element 2 sets the min variable, we'll detect a min value of 2 instead of 1. This can lead to wrong answers. Unfortunately, fixing this with the Parallel.ForEach call we're using would require adding locking. We would need to rewrite this like:

        // Safe, but slow
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            lock(syncObject)
                min = System.Math.Min(min, value);
        });

    This will potentially add a huge amount of overhead to our calculation. Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation. The problem is the lock statement – any time you use lock(object), you're almost assuring reduced performance in a parallel situation.
    This leads to two observations I'll make:
        - When parallelizing a routine, try to avoid locks.
        - That being said: always add any and all required synchronization to avoid race conditions.
    These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time. However, this isn't the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE). Processing Element is the standard term for a hardware element which can process and execute instructions. This typically is a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it will partition the collection into larger "chunks" which get processed on different threads via the ThreadPool. This helps reduce the threading overhead, and helps the overall speed. In general, the Parallel class will only use one thread per PE in the system. Given the fact that there are typically fewer threads than work items, we can rethink our algorithm design. We can parallelize our algorithm more effectively by approaching it differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform this in a given order. We knew this to be true already – otherwise, we wouldn't have been able to parallelize this routine in the first place. With this in mind, we can treat each thread's work independently, allowing each thread to serially process many elements with no locking, then, after all the threads are complete, "merge" together the results. This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results. To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as:

        public static ParallelLoopResult ForEach<TSource, TLocal>(
            IEnumerable<TSource> source,
            Func<TLocal> localInit,
            Func<TSource, ParallelLoopState, TLocal, TLocal> body,
            Action<TLocal> localFinally
        )

    The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread's operations. The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
    This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed. The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it's been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you'll only have to synchronize once per thread, which is an ideal situation. Now that I've explained how this works, let's look at the code:

        // Safe, and fast!
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock(syncObject)
                    min = System.Math.Min(min, localState);
            }
        );

    Although this is a bit more complicated than the previous version, it is now both thread-safe and has minimal locking. This same approach can be used by Parallel.For, although now it's Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results. Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique using Parallel.For uses the Interlocked class instead of a lock, since they are doing a sum operation on a long variable, which is possible via Interlocked.Add. By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often requires a slight redesign of the algorithm, but the performance gains can be significant if handled in a way that avoids excessive synchronization.

    Read the article

  • Identifying Data Model Changes Between EBS 12.1.3 and Prior EBS Releases

    - by Steven Chan
    The EBS 12.1.3 Release Content Document (RCD, Note 561580.1) summarizes the latest functional and technology stack-related updates in a specific release. The E-Business Suite Electronic Technical Reference Manual (eTRM) summarizes the database objects in a specific EBS release. Those are useful references, but sometimes you need to find out which database objects have changed between one EBS release and another. This kind of information about the differences or deltas between two releases is useful if you have customized or extended your EBS instance and plan to upgrade to EBS 12.1.3. Where can you find that information? Answering that question has just gotten a lot easier. You can now use a new EBS Data Model Comparison Report tool: EBS Data Model Comparison Report Overview (Note 1290886.1). This new tool lists the database object definition changes between the following source and target EBS releases:
        EBS 11.5.10.2 and EBS 12.1.3
        EBS 12.0.4 and EBS 12.1.3
        EBS 12.1.1 and EBS 12.1.3
        EBS 12.1.2 and EBS 12.1.3
    For example, here's part of the report comparing Bill of Materials changes between 11.5.10.2 and 12.1.3:

    Read the article

  • Manchester UG Presentation Video

    In July I was invited to speak at the UK SQL Server UG event in Manchester. I spoke about Excel being a good data mining client. I was a little rushed at the end, as Chris Testa-O'Neill told me I had only 5 minutes to go when I had only been talking for 10 minutes. Apparently I have a reputation for running over my time allocation. At the event we also had a product demo from SQL Sentry around their BI monitoring dashboard solution; this includes SSIS, but the main thrust was SSAS. Then came Chris with a look at Analysis Services. If you have never heard Chris talk, take the opportunity now; he is a top-class presenter and I am often found sat at the back of his classes. Here is the video link

    Read the article
