Search Results

Search found 3996 results on 160 pages for 'operations'.


  • Is it OK to use REST for CRUD operations?

    - by l0l0l0l0l
    Recently I moved to Laravel and I was surprised by how good setting the controllers up as RESTful is; it made my routes and code cleaner. I'm kind of new to web development and never used REST before, since all my clients' projects are basically CRUD operations. Is there a cool buzzword for this "approach", or am I just stupid for doing it? I don't plan to follow any REST patterns strictly, just to make my life easier and my code cleaner. Basically I just use GET and POST; the other verbs are not native to browsers anyway, so they're emulated with a hidden form value.

    Read the article

  • One-position right barrel shift using ALU operations?

    - by Tomek
    I was wondering if there is an efficient way to perform a right shift on an 8-bit binary value using only ALU operations (NOT, OR, AND, XOR, ADD, SUB). Example: input 00110101, output 10011010. I have been able to implement a left shift by just adding the 8-bit value to itself, since a shift left is equivalent to multiplying by 2. However, I can't think of a way to do this for a right shift. The only method I have come up with so far is to perform 7 left barrel shifts. Is this the only way?
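    As a sanity check on the "7 left barrel shifts" idea: for an 8-bit value, rotating left seven times is the same as rotating right once. The sketch below (C#, names illustrative) only verifies that equivalence, using ordinary shift operators for the single rotate; whether one left rotate can itself be built from the listed ALU operations depends on how the wrapped-out top bit is recovered, which the question leaves open.

        using System;

        class RotateCheck
        {
            // One-position left rotate of an 8-bit value (written with shifts purely for verification).
            static byte RotL(byte x)
            {
                return (byte)(((x << 1) | (x >> 7)) & 0xFF);
            }

            // One-position right barrel shift, expressed as seven left rotates.
            static byte RotRViaLeft(byte x)
            {
                for (int i = 0; i < 7; i++) x = RotL(x);
                return x;
            }

            static void Main()
            {
                byte input = 0x35; // 00110101
                byte output = RotRViaLeft(input);
                Console.WriteLine(Convert.ToString(output, 2).PadLeft(8, '0')); // 10011010
            }
        }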

    Read the article

  • Is it possible to use IOCP (or another API) in reactor-style operations?

    - by Artyom
    Hello. Is there any scalable Win32 API (like IOCP, not like select) that gives you reactor-style operations on sockets? AFAIK IOCP lets you receive notifications of completed operations such as data read or written (proactor), but I'm looking for the reactor style of operation: I need to get a notification when the socket is readable or writable (reactor), something similar to epoll, kqueue, or /dev/poll. Is there such an API in Win32? If so, where can I find a manual for it? Clarification: I need a select-like API for sockets that is as scalable as IOCP, or a way to use IOCP for reactor-like operations.

    Read the article

  • Why does adding Crossover to my Genetic Algorithm give me worse results?

    - by MahlerFive
    I have implemented a Genetic Algorithm to solve the Traveling Salesman Problem (TSP). When I use only mutation, I find better solutions than when I add in crossover. I know that normal crossover methods do not work for TSP, so I implemented both the Ordered Crossover and the PMX Crossover methods, and both suffer from bad results. Here are the other parameters I'm using:
    - Mutation: Single Swap Mutation or Inverted Subsequence Mutation (as described by Tiendil here), with mutation rates tested between 1% and 25%.
    - Selection: Roulette Wheel Selection.
    - Fitness function: 1 / distance of tour.
    - Population size: tested 100, 200, and 500; I also run the GA 5 times so that I have a variety of starting populations.
    - Stop condition: 2500 generations.
    With the same dataset of 26 points, I usually get results of about 500-600 distance using purely mutation with high mutation rates. When adding crossover, my results are usually in the 800 distance range. The other confusing thing is that I have also implemented a very simple hill-climbing algorithm to solve the problem, and when I run that 1000 times (faster than running the GA 5 times) I get results around 410-450 distance; I would expect to get better results using a GA. Any ideas as to why my GA is performing worse when I add crossover? And why is it performing much worse than a simple hill-climbing algorithm, which should get stuck on local maxima as it has no way of exploring once it finds a local max?
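    Since the question mentions Ordered Crossover, here is a small sketch (C#, with invented example data) of one common OX variant for permutation tours: copy a slice from the first parent, then fill the remaining positions with the second parent's cities in their order, skipping duplicates. Details such as crossover-point selection vary between descriptions, so treat this as illustrative only.

        using System;
        using System.Collections.Generic;

        static class OrderedCrossoverDemo
        {
            // Keeps parent1's cities in positions [start, end) and fills the rest
            // with parent2's cities in their original (wrapped) order.
            public static int[] Cross(int[] parent1, int[] parent2, int start, int end)
            {
                int n = parent1.Length;
                var child = new int[n];
                var used = new HashSet<int>();

                // Copy the chosen slice from parent1.
                for (int i = start; i < end; i++)
                {
                    child[i] = parent1[i];
                    used.Add(parent1[i]);
                }

                // Fill the remaining slots with parent2's cities, skipping ones already used.
                int pos = end % n;
                for (int i = 0; i < n; i++)
                {
                    int city = parent2[(end + i) % n];
                    if (used.Contains(city)) continue;
                    child[pos] = city;
                    pos = (pos + 1) % n;
                }
                return child;
            }

            static void Main()
            {
                int[] p1 = { 0, 1, 2, 3, 4, 5, 6, 7 };
                int[] p2 = { 3, 7, 0, 6, 2, 5, 1, 4 };
                Console.WriteLine(string.Join(" ", Cross(p1, p2, 2, 5))); // 0 6 2 3 4 5 1 7
            }
        }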

    Read the article

  • I (think) I want to use a bitwise operator to check the userAccountControl property!

    - by Jim
    Hello, here's some code:
        DirectorySearcher searcher = new DirectorySearcher();
        searcher.Filter = "(&(objectClass=user)(sAMAccountName=" + lstUsers.SelectedItem.Text + "))";
        SearchResult result = searcher.FindOne();
    Within result.Properties["useraccountcontrol"] will be an item which gives me a value depending on the state of the account. For instance, a value of 66050 means I'm dealing with: a normal account, whose password does not expire, which has been disabled. Explanation here. What's the most concise way of finding out whether my value "contains" the AccountDisable flag (which is 2)? Thanks in advance!
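    A minimal sketch of the usual bit test, assuming the value comes back as a boxed int (with a DirectorySearcher result that would typically be (int)result.Properties["userAccountControl"][0]) and using 0x2 for the disabled flag; the constant and method names here are just illustrative.

        using System;

        class UacCheck
        {
            // ACCOUNTDISABLE flag from the userAccountControl bit field.
            private const int ACCOUNTDISABLE = 0x2;

            // True when the ACCOUNTDISABLE bit is set in the userAccountControl value.
            static bool IsDisabled(int userAccountControl)
            {
                return (userAccountControl & ACCOUNTDISABLE) != 0;
            }

            static void Main()
            {
                int uac = 66050; // normal account + password never expires + disabled
                Console.WriteLine(IsDisabled(uac)); // True
            }
        }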

    Read the article

  • Is this code well-defined?

    - by Nawaz
    This code is taken from a discussion going on here.
        someInstance.Fun(++k).Gun(10).Sun(k).Tun();
    Is this code well-defined? Is ++k in Fun() evaluated before k in Sun()? What if k is a user-defined type rather than a built-in type? And in what way is the order of the function calls above different from this:
        eat(++k); drink(10); sleep(k);
    As far as I can tell, in both situations there exists a sequence point after each function call. If so, then why isn't the first case also well-defined like the second one? Section 1.9.17 of the C++ ISO standard says this about sequence points and function evaluation: "When calling a function (whether or not the function is inline), there is a sequence point after the evaluation of all function arguments (if any) which takes place before execution of any expressions or statements in the function body. There is also a sequence point after the copying of a returned value and before the execution of any expressions outside the function."

    Read the article

  • Sikuli List of Functions & Operators

    - by PPTim
    Hello, I've just discovered Sikuli and would like to see a comprehensive list of its functions without digging through the online examples and demos. Has anyone found such a list? Furthermore, Sikuli apparently supports more complex loops and function calls as well, and seems to be based on Python(!!). Examples would be great. Thanks.

    Read the article

  • Is it faster to use a complicated boolean to limit a ResultSet at the MySQL end or at the Java end?

    - by javanix
    Let's say I have a really big table filled with lots of data (say, enough not to fit comfortably in memory), and I want to analyze a subset of the rows. Is it generally faster to do:
        SELECT (column1, column2, ... , columnN) FROM table WHERE (some complicated boolean clause);
    and then use the ResultSet, or is it faster to do:
        SELECT (column1, column2, ... , columnN) FROM table;
    and then iterate over the ResultSet, accepting different rows based on a Java version of your boolean condition? I think it comes down to whether the Java iterator/boolean evaluator is faster than the MySQL boolean evaluator.

    Read the article

  • I'm writing a spell-checking program; how do I replace a character in a string?

    - by Ajay Hopkins
    What am I doing wrong, and what can I do?
        import sys
        import string

        def remove(file):
            punctuation = string.punctuation
            for ch in file:
                if len(ch) > 1:
                    print('error - ch is larger than 1 --| {0} |--'.format(ch))
                if ch in punctuation:
                    ch = ' '
                    return ch
                else:
                    return ch

        ref = (open("ref.txt","r"))
        test_file = (open("test.txt", "r"))
        dictionary = ref.read().split()
        file = test_file.read().lower()
        file = remove(file)
        print(file)
    P.S. This is in Python 3.1.2.

    Read the article

  • How to reverse a bitwise AND (&) in C?

    - by VaioIsBorn
    For example, I have an operation in C like this: ((unsigned int)ptr & 0xff000000). The result is 0xbf000000. What I need now is to reverse the above, i.e., determine ptr using the result from the operation and, of course, 0xff000000. I am asking if there's any simple way to implement this in C. Thanks.
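    For what it's worth, AND discards information: the result keeps only the bits selected by the mask, so the lower 24 bits of ptr cannot be recovered from 0xbf000000 and 0xff000000 alone; every value whose top byte is 0xbf gives the same result. A tiny sketch (C#, with made-up sample values) illustrating the information loss:

        using System;

        class AndIsLossy
        {
            static void Main()
            {
                uint mask = 0xff000000;

                // Two different "pointer" values that share the same top byte...
                uint a = 0xbf123456;
                uint b = 0xbfffffff;

                // ...produce exactly the same masked result, so the original value
                // cannot be reconstructed from the result and the mask alone.
                Console.WriteLine((a & mask) == (b & mask)); // True
                Console.WriteLine("{0:x8}", a & mask);       // bf000000
            }
        }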

    Read the article

  • Fastest method for finding the minimum of two numbers

    - by user85030
    I was going through MIT's OpenCourseWare material on performance engineering. The quickest method (requiring the fewest clock cycles) for finding the minimum of two numbers (say x and y) is stated as:
        min = y ^ ((x ^ y) & -(x < y))
    The output of the expression x < y can be 0 or 1 (assuming C is being used), which then changes to -0 or -1. I understand that XOR can be used to swap two numbers. Questions:
    1. How is -0 different from 0 and -1 in terms of binary?
    2. How is that result used with the AND operator to get the minimum?
    Thanks in advance.
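    A sketch of the trick in C#, assuming 32-bit two's-complement ints: -0 is just 0 (all bits clear) and -1 is all bits set, so the mask -(x < y) is either all zeros or all ones. ANDing it with x ^ y keeps either nothing or the full difference pattern, and XORing that back into y yields either y or x. C# needs an explicit 1/0 for the comparison, which is the only change from the C form.

        using System;

        class BranchlessMin
        {
            // Branchless minimum: mask is 0 when x >= y and -1 (all ones) when x < y.
            static int Min(int x, int y)
            {
                int mask = -(x < y ? 1 : 0); // in C this is just -(x < y)
                return y ^ ((x ^ y) & mask);
            }

            static void Main()
            {
                Console.WriteLine(Min(3, 7));  // 3
                Console.WriteLine(Min(7, 3));  // 3
                Console.WriteLine(Min(-5, 2)); // -5
            }
        }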

    Read the article

  • Is this kind of design - a class for operations on an object - correct?

    - by Mithir
    In our system we have many complex operations which involve many validations and DB activities. One of the main pieces of business functionality could have been designed better. In short, there was no separation of layers, and the code only worked for the scenario it was first designed for; now there are more scenarios (like requests from an API or from other devices), so I had to redesign. I found myself moving all the DB code into objects which act as business-to-DB objects, and I put all the business logic in an Operator kind of class, which I've implemented like this: First, I created an object which holds all the information needed for the operation; let's call it InformationObject. Then I created an OperatorObject which takes the InformationObject as a parameter and acts on it. The OperatorObject should activate different objects, validate or check for existence or any scenario in which the business logic is compromised, and then perform the operation according to the information on the InformationObject. So my question is: is this kind of implementation correct? P.S. This Operator only works on a single business-wise operation.
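    To make the shape concrete, here is a minimal sketch (C#; all class, property, and method names other than InformationObject and OperatorObject are invented for illustration): a plain data-carrier object plus an operator class that validates it and then delegates persistence to a business-to-DB object.

        using System;

        // Carries everything a single business operation needs; no behaviour of its own.
        class InformationObject
        {
            public int CustomerId { get; set; }
            public decimal Amount { get; set; }
            public string Source { get; set; } // e.g. "web", "api", "device"
        }

        // Encapsulates the business rules for one operation, independent of where the request came from.
        class OperatorObject
        {
            public void Execute(InformationObject info)
            {
                Validate(info);

                // Delegate persistence to a business-to-DB object rather than talking to the DB directly.
                new OrderRepository().SaveOrder(info.CustomerId, info.Amount);
            }

            private void Validate(InformationObject info)
            {
                if (info.Amount <= 0)
                    throw new ArgumentException("Amount must be positive.");
            }
        }

        // Stand-in for whatever data-access object the real system uses.
        class OrderRepository
        {
            public void SaveOrder(int customerId, decimal amount)
            {
                Console.WriteLine("Saving order for {0}: {1}", customerId, amount);
            }
        }

        class Program
        {
            static void Main()
            {
                var info = new InformationObject { CustomerId = 42, Amount = 19.99m, Source = "api" };
                new OperatorObject().Execute(info);
            }
        }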

    Read the article

  • What would make a noise in a PC on graphics operations on a passively-cooled system?

    - by T.J. Crowder
    I have this system based on the Intel D510MO motherboard, which is basically an Atom D510 (dual-core HT Atom with a built-in GPU), an Intel NM10 chipset, and a Realtek Gigabit LAN controller. It's entirely passively cooled. I noticed almost immediately that there was a kind of very, very soft noise that corresponded with graphics operations, sort of the noise you'd get if you had a sheet of flat paper and slid something really light across it - but more electronic than that. I wrote it off as observation error and/or disk activity triggered by the graphics operation (although the latter seemed like a lot of unnecessary disk activity). It isn't. I got curious enough that I finally did a few controlled experiments, and here's what I've determined:
    - It isn't the HDD. For one thing, the sounds the HDD makes (when seeking, when reading or writing, when just sitting there spinning) are different. For another, I used sudo hdparm -y /dev/sda (I'm using Ubuntu 10.04 LTS) to temporarily put the disk on standby while making sure that a non-disk graphics op was happening in a loop. The disk spun down, but the other sound continued, corresponding perfectly with the timing of the graphics op. (Then the disk spun up again, but it takes long enough that I could rule out the HDD.)
    - It isn't the monitor; I ensured the two were well physically separated and the sound was definitely coming from the main box.
    - It isn't something else in the room; the sound is coming from the box.
    - It isn't cross-talk to an audio circuit coming out the speakers. (It doesn't have any speakers.)
    - It isn't my mouse (e.g., when I'm trying to make graphics ops happen); the sound happens if I set up a recurring operation and don't use the mouse at all, or if I lift the mouse off the table slightly (but enough that the laser still registers movement).
    - It isn't the voices in my head; they never whisper like that.
    Other observations:
    - It doesn't seem to matter what the graphics operation is; anything that changes what's on the screen seems to do it. I get the sound when moving the mouse over the Chromium tab bar (which makes the tab backgrounds change); I get it when a web page has a counter on it that changes the text on the page; I get it when dragging window contents around.
    - The sound is very, very slightly louder if the graphics op is larger, like scrolling a text area when writing a question on superuser.com, than for smaller operations like the tick counter on the web page. But it's very slight.
    - It's fairly loud (and of good duration) when the op involves color changes to substantial surface areas. For instance, when asking a question here on superuser and you move the cursor between the question box and the tag box, and the help to the right fades out, changes, and fades back in. (Yet another example related to the web browser, so let me say: I hear it with operations completely unrelated to the web browser as well.)
    - It doesn't sound like arcing or anything like that (I'd've shut off the machine Right Quick Like if it did).
    - Moving windows does it. Scrolling windows (by and large) doesn't.
    I have the feeling I've heard this sort of thing before, when all system fans were on low and such, with other systems - but (again) written it off as observational error. For all the world it's like I'm hearing the CPU working (as opposed to the GPU; note the window scroll thing above) or data being transferred somewhere, but that just seems... unlikely. So what am I hearing?
    This may seem like a very localized question, but perhaps other silent PC enthusiasts may be interested as well...

    Read the article

  • Inside the Concurrent Collections: ConcurrentBag

    - by Simon Cooper
    Unlike the other concurrent collections, ConcurrentBag does not really have a non-concurrent analogy. As stated in the MSDN documentation, ConcurrentBag is optimised for the situation where the same thread is both producing and consuming items from the collection. We'll see how this is the case as we take a closer look. Again, I recommend you have ConcurrentBag open in a decompiler for reference.

    Thread Statics
    ConcurrentBag makes heavy use of thread statics - static variables marked with ThreadStaticAttribute. This is a special attribute that instructs the CLR to scope any values assigned to or read from the variable to the executing thread, not globally within the AppDomain. This means that if two different threads assign two different values to the same thread static variable, one value will not overwrite the other, and each thread will see the value it assigned to the variable, separately from any other thread. This is a very useful feature that allows for ConcurrentBag's concurrency properties. You can think of a thread static variable:
        [ThreadStatic]
        private static int m_Value;
    as doing the same as:
        private static Dictionary<Thread, int> m_Values;
    where the executing thread's identity is used to automatically set and retrieve the corresponding value in the dictionary. In .NET 4, this usage of ThreadStaticAttribute is encapsulated in the ThreadLocal class.

    Lists of lists
    ConcurrentBag, at its core, operates as a linked list of linked lists. Each outer list node is an instance of ThreadLocalList, and each inner list node is an instance of Node. Each outer ThreadLocalList is owned by a particular thread, accessible through the thread-local m_locals variable:
        private ThreadLocal<ThreadLocalList<T>> m_locals
    It is important to note that, although the m_locals variable is thread-local, that only applies to accesses through that variable. The objects referenced by the thread (each instance of the ThreadLocalList object) are normal heap objects that are not specific to any thread. Thinking back to the Dictionary analogy above, if each value stored in the dictionary could be accessed by other means, then any thread could access the value belonging to other threads using that mechanism. Only reads and writes to the variable defined as thread-local are re-routed by the CLR according to the executing thread's identity. So, although m_locals is defined as thread-local, the m_headList, m_nextList and m_tailList variables aren't. This means that any thread can access all the thread local lists in the collection by doing a linear search through the outer linked list defined by these variables.

    Adding items
    So, onto the collection operations. First, adding items. This one's pretty simple. If the current thread doesn't already own an instance of ThreadLocalList, then one is created (or, if there are lists owned by threads that have stopped, it takes control of one of those). Then the item is added to the head of that thread's list. That's it. Don't worry, it'll get more complicated when we account for the other operations on the list!

    Taking & Peeking items
    This is where it gets tricky. If the current thread's list has items in it, then it peeks or removes the head item (not the tail item) from the local list and returns that. However, if the local list is empty, it has to go and steal an item from another list, belonging to a different thread. It iterates through all the thread local lists in the collection using the m_headList and m_nextList variables until it finds one that has items in it, and it steals one item from that list. Up to this point, the two threads had been operating completely independently. To steal an item from another thread's list, the stealing thread has to do it in such a way as to not step on the owning thread's toes. Recall how adding and removing items both operate on the head of the thread's linked list? That gives us an easy way out - a thread trying to steal items from another thread can pop in round the back of that thread's list using the m_tail variable, and steal an item from the back without the owning thread knowing anything about it. The owning thread can carry on completely independently, unaware that one of its items has been nicked. However, this only works when there are at least 3 items in the list, as that guarantees there will be at least one node between the owning thread performing operations on the list head and the thread stealing items from the tail - there's no chance of the two threads operating on the same node at the same time and causing a race condition. If there are fewer than three items in the list, then there does need to be some synchronization between the two threads. In this case, the lock on the ThreadLocalList object is used to mediate access to a thread's list when there's the possibility of contention.

    Thread synchronization
    In ConcurrentBag, this is done using several mechanisms:
    - Operations performed by the owner thread only take out the lock when there are fewer than three items in the collection. With three or more items, there won't be any conflict with a stealing thread operating on the tail of the list.
    - If a lock isn't taken out, the owning thread sets the list's m_currentOp variable to a non-zero value for the duration of the operation. This indicates to all other threads that there is a non-locked operation currently occurring on that list.
    - The stealing thread always takes out the lock, to prevent two threads trying to steal from the same list at the same time.
    - After taking out the lock, the stealing thread spinwaits until m_currentOp has been set to zero before actually performing the steal. This ensures there won't be a conflict with the owning thread when the number of items in the list is on the 2-3 item borderline.
    - If any add or remove operations are started in the meantime, and the list is below 3 items, those operations try to take out the list's lock and are blocked until the stealing thread has finished.
    This allows a thread to steal an item from another thread's list without corrupting it. What about synchronization in the collection as a whole?

    Collection synchronization
    Any thread that operates on the collection's global structure (accessing anything outside the thread local lists) has to take out the collection's global lock - m_globalListsLock. This single lock is sufficient when adding a new thread local list, as the items inside each thread's list are unaffected. However, what about operations (such as Count or ToArray) that need to access every item in the collection? In order to ensure a consistent view, all operations on the collection are stopped while the count or ToArray is performed. This is done by freezing the bag at the start, performing the global operation, and unfreezing at the end:
    - The global lock is taken out, to prevent structural alterations to the collection.
    - m_needSync is set to true. This notifies all the threads that they need to take out their list's lock regardless of what operation they're doing.
    - All the list locks are taken out in order. This blocks all locking operations on the lists.
    - The freezing thread waits for all current lockless operations to finish by spinwaiting on each m_currentOp field.
    The global operation can then be performed while the bag is frozen, but no other operations can take place at the same time, as all other threads are blocked on a list's lock. Then, once the global operation has finished, the locks are released, m_needSync is unset, and normal concurrent operation resumes.

    Concurrent principles
    That's the essence of how ConcurrentBag operates. Each thread operates independently on its own local list, except when it has to steal items from another list. When stealing, only the stealing thread is forced to take out the lock; the owning thread only has to when there is the possibility of contention. And a global lock controls access to the structure of the collection outside the thread lists. Operations affecting the entire collection take out all locks in the collection to freeze the contents at a single point in time. So, what principles can we extract here?
    - Threads operate independently: thread-static variables and ThreadLocal make this easy. Threads operate entirely concurrently on their own structures; only when they need to grab data from another thread is there any thread contention.
    - Minimised lock-taking: even when two threads need to operate on the same data structures (one thread stealing from another), they do so in such a way that the probability of actually blocking on a lock is minimised; the owning thread always operates on the head of the list, and the stealing thread always operates on the tail.
    - Management of lockless operations: any operations that don't take out a lock still have a 'hook' to force them to lock when necessary. This allows all operations on the collection to be stopped temporarily while a global snapshot is taken. Hopefully, such operations will be short-lived and infrequent.
    That's all the concurrent collections covered. I hope you've found it as informative and interesting as I have. Next, I'll be taking a closer look at ThreadLocal, which I came across while analyzing ConcurrentBag. As you'll see, the operation of this class deserves a much closer look.
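    Since the article leans on thread-local storage, here is a minimal self-contained sketch of the behaviour it describes, using ThreadLocal<int> from System.Threading (the variable and method names are just for illustration): each thread gets its own copy of the value, so the two workers below each count to 5 rather than sharing one counter.

        using System;
        using System.Threading;

        class ThreadLocalDemo
        {
            // Each thread that touches this gets its own independent value,
            // initialised by the factory delegate on first access from that thread.
            private static readonly ThreadLocal<int> counter = new ThreadLocal<int>(() => 0);

            static void Work()
            {
                for (int i = 0; i < 5; i++)
                    counter.Value++;

                // Each worker prints 5: its increments are invisible to the other thread.
                Console.WriteLine("Thread {0}: {1}",
                    Thread.CurrentThread.ManagedThreadId, counter.Value);
            }

            static void Main()
            {
                var t1 = new Thread(Work);
                var t2 = new Thread(Work);
                t1.Start();
                t2.Start();
                t1.Join();
                t2.Join();
            }
        }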

    Read the article
