Search Results

Search found 37101 results on 1485 pages for 'array based'.


  • Relationship between "Task Parallel Library" and "Task-based Asynchronous Pattern"?

    - by Sid
    In the context of C# and .NET 4/4.5 used for an application running on a web server, what is the relationship between the "Task Parallel Library" and the "Task-based Asynchronous Pattern"? I understand one is a library and the other is a pattern, but to dig deeper, is it a case of "the pattern uses the library to enforce good practices"? I'm also not clear whether both are supported in .NET 4.0 (with the await and async keywords). Edit: It seems that await and async are only in .NET 4.5 ...
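
    To make the relationship concrete, here is a minimal sketch (assuming .NET 4.5, where async/await are available; the URL is just a placeholder): a TAP method is simply a method that returns the TPL's Task or Task<T>, and async/await is compiler support for consuming such methods.

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;

        class TapDemo
        {
            // A TAP-style method: by convention it ends in "Async" and returns Task<T>.
            // Task itself is the Task Parallel Library type (System.Threading.Tasks).
            static async Task<int> DownloadLengthAsync(string url)
            {
                using (var client = new HttpClient())
                {
                    string body = await client.GetStringAsync(url); // awaiting another TAP method
                    return body.Length;
                }
            }

            static void Main()
            {
                // The returned Task can also be driven with plain TPL calls.
                Task<int> task = DownloadLengthAsync("http://example.com");
                Console.WriteLine(task.Result); // blocks; fine for a console sketch
            }
        }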

    Read the article

  • FIX: A dump file is not generated for a .NET Framework 2.0-based application after the application closes

    981878 ... FIX: A dump file is not generated for a .NET Framework 2.0-based application after the application closes.

    Read the article

  • What Instruments Does a Web Based Project Management System Offer Us?

    Nowadays, in order to manage varied and complex projects successfully, a project owner has access to a multitude of web based software covering key areas such as scheduling, cost control, budget management, resource allocation, documentation and communication. Managing projects becomes less time- and resource-consuming while maximizing collaboration between team members who, in certain situations, must stay connected to partial outcomes.

    Read the article

  • SQL Server Integration Services package to delete files from a Network or Local path based on date

    We have a requirement to delete a group of files that are older than the specified number of days from the company file share. Due to the complex folder hierarchy and delicate nature of the data stored in these files, this task has to be originated from SQL Server. However, due to company security policy, and based on SQL Server security best practices, we blocked access to OLE Automation stored procedures, CLR features, and xp_cmdshell. Is there any way to accomplish this task without using these features?
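
    One possible direction (a sketch only, not a full package) is an SSIS Script Task whose C# body deletes files older than a cut-off date; the folder path and retention period below are hypothetical and would normally come from package variables.

        using System;
        using System.IO;

        public static class FileCleanup
        {
            // Delete every file older than 'olderThanDays' under 'folderPath' (recursively).
            public static void DeleteOldFiles(string folderPath, int olderThanDays)
            {
                DateTime cutoff = DateTime.Now.AddDays(-olderThanDays);
                foreach (string file in Directory.GetFiles(folderPath, "*", SearchOption.AllDirectories))
                {
                    if (File.GetLastWriteTime(file) < cutoff)
                        File.Delete(file); // the SSIS/Agent service account needs delete rights on the share
                }
            }
        }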

    Read the article

  • Ruby: reduce duplication while initializing a hash

    - by user612308
    array = [0, 0.3, 0.4, 0.2, 0.6]
    hash = { "key1" => array[0..2], "key2" => array[0..3], "key3" => array, "key4" => array,
             "key5" => array, "key6" => array, "key7" => array }

    Is there a way I can remove the duplication by doing something like:

    hash = { "key1" => array[0..2], "key2" => array[0..3],
             %(key3, key4, key5, key6, key7).each {|ele| ele => array} }

    Read the article

  • Can I use a Smart Array P410 controller on an HP ProLiant DL 180 G5 server?

    - by Massimo
    This link says the HP ProLiant DL 180 G5 server only supports Smart Array E200, E500, P400 and P800 controllers. Can I use a Smart Array P410 instead? Currently the server has only the default internal controller; I need an additional one for better performance and to support additional drives (the default controller only supports 4, while with a Smart Array controller all 8 bays can be used). The disks are SATA.

    Read the article

  • Efficient method of getting all plist arrays into one array?

    - by cannyboy
    If I have a plist which is structured like this:

    Root (Array)
      Item 0 (Dictionary)
        City (String): New York
        People (Array)
          Item 0 (String): Steve
          Item 1 (String): Paul
          Item 2 (String): Fabio
          Item 3 (String): David
          Item 4 (String): Penny
      Item 1 (Dictionary)
        City (String): London
        People (Array)
          Item 0 (String): Linda
          Item 1 (String): Rachel
          Item 2 (String): Jessica
          Item 3 (String): Lou
      Item 2 (Dictionary)
        City (String): Barcelona
        People (Array)
          Item 0 (String): Edward
          Item 1 (String): Juan
          Item 2 (String): Maria

    Then what is the most efficient way of getting all the names of the people into one big NSArray?

    Read the article

  • How to Declare an Array of Generic Dictionaries in C#?

    - by Nave
    I want to create an array of Dictionaries, but the array size is unknown. For integers, I used List to obtain an integer array of unknown size, but in the case of Dictionary I am not able to create a list of Dictionary. Is there any way this can be done? Dictionary<int, String> paramList = null; I want to create an array of paramList (above). I am using C#.
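
    A rough sketch of one way to do it (the keys and values here are placeholders): use List<Dictionary<int, string>> to grow the collection, and call ToArray() only if a real array is needed at the end.

        using System;
        using System.Collections.Generic;

        class Demo
        {
            static void Main()
            {
                // A growable "array" of dictionaries: List<T> manages the unknown size.
                var paramList = new List<Dictionary<int, string>>();
                paramList.Add(new Dictionary<int, string> { { 1, "first" }, { 2, "second" } });
                paramList.Add(new Dictionary<int, string>());

                // If a plain array is really needed afterwards:
                Dictionary<int, string>[] paramArray = paramList.ToArray();
                Console.WriteLine(paramArray.Length); // 2
            }
        }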

    Read the article

  • PHP output of foreach loop into a new array.

    - by RobHardgood
    I know I'm probably missing something easy, but I have a foreach loop and I'm trying to modify the values of the first array, and output a new array with the modifications as the new values. Basically I'm starting with an array:

    0 = A:B
    1 = B:C
    2 = C:D

    And I'm using explode() to strip out the :'s and second letters, so I want to be left with an array:

    0 = A
    1 = B
    2 = C

    The explode() part of my function works fine, but I only seem to get single string outputs: A, B, and C.

    Read the article

  • How to store an array in session in ASP.NET MVC?

    - by mary
    Can you please tell me how to store an array in session and how to retrieve that array from session? I am trying to store an array of type Double and assign values of the same type, but it is showing me an error, so can anyone please tell me how to assign values to an array which is in session? Thank you.
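
    As a sketch (a plain ASP.NET MVC controller; the "scores" key and the values are placeholders), storing and reading a double[] could look like this -- the main point is that Session values come back as object and must be cast on the way out.

        using System.Web.Mvc;

        public class ScoresController : Controller
        {
            public ActionResult Save()
            {
                double[] scores = { 1.5, 2.0, 3.25 };   // placeholder values
                Session["scores"] = scores;             // store the array in session
                return RedirectToAction("Load");
            }

            public ActionResult Load()
            {
                // Cast back to double[] when reading; fall back to an empty array if missing.
                var scores = Session["scores"] as double[] ?? new double[0];
                return Content(string.Join(", ", scores));
            }
        }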

    Read the article

  • How can I pass an array of floats to the fragment shader using textures?

    - by James
    I want to map out a 2D array of depth elements for the fragment shader to check depth against when creating shadows. I want to be able to copy a float array into the GPU, but using large uniform arrays causes segfaults in OpenGL, so that is not an option. I tried texturing, but the best I got was to use GL_DEPTH_COMPONENT: glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, smap); That doesn't work, because it stores depth components (0.0 - 1.0), which I don't want since I have no idea how to calculate them from the depth value produced by multiplying the light source's MVP matrix by the coordinate of each vertex. Is there any way to store and access large 2D arrays of floats in OpenGL?

    Read the article

  • Why does my code dividing a 2D array into chunks fail?

    - by Borog
    I have a 2D-Array representing my world. I want to divide this huge thing into smaller chunks to make collision detection easier. I have a Chunk class that consists only of another 2D Array with a specific width and height and I want to iterate through the world, create new Chunks and add them to a list (or maybe a Map with Coordinates as the key; we'll see about that). world = new World(8192, 1024); Integer[][] chunkArray; for(int a = 0; a < map.getHeight() / Chunk.chunkHeight; a++) { for(int b = 0; b < map.getWidth() / Chunk.chunkWidth; b++) { Chunk chunk = new Chunk(); chunkArray = new Integer[Chunk.chunkWidth][Chunk.chunkHeight]; for(int x = Chunk.chunkHeight*a; x < Chunk.chunkHeight*(a+1); x++) { for(int y = Chunk.chunkWidth*b; y < Chunk.chunkWidth*(b+1); y++) { // Yes, the tileMap actually is [height][width] I'll have // to fix that somewhere down the line -.- chunkArray[y][x] = map.getTileMap()[x*a][y*b]; // TODO:Attach to chunk } } chunkList.add(chunk); } } System.out.println(chunkList.size()); The two outer loops get a new chunk in a specific row and column. I do that by dividing the overall size of the map by the chunkSize. The inner loops then fill a new chunkArray and attach it to the chunk. But somehow my maths is broken here. Let's assume the chunkHeight = chunkWidth = 64. For the first Array I want to start at [0][0] and go until [63][63]. For the next I want to start at [64][64] and go until [127][127] and so on. But I get an out of bounds exception and can't figure out why. Any help appreciated! Actually I think I know where the problem lies: chunkArray[y][x] can't work, because y goes from 0-63 just in the first iteration. Afterwards it goes from 64-127, so sure it is out of bounds. Still no nice solution though :/ EDIT: if(y < Chunk.chunkWidth && x < Chunk.chunkHeight) chunkArray[y][x] = map.getTileMap()[y][x]; This works for the first iteration... now I need to get the commonly accepted formula.
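
    For comparison, a minimal sketch of copying one chunk out of a larger world array (written in C#, but the index arithmetic is identical in Java): the chunk is always indexed from 0, and the chunk's offset is added only when reading from the world array.

        // Copy chunk (chunkRow, chunkCol) out of a larger world[y][x] array.
        static int[][] ExtractChunk(int[][] world, int chunkRow, int chunkCol,
                                    int chunkHeight, int chunkWidth)
        {
            int[][] chunk = new int[chunkHeight][];
            for (int y = 0; y < chunkHeight; y++)          // local row index, always 0..chunkHeight-1
            {
                chunk[y] = new int[chunkWidth];
                for (int x = 0; x < chunkWidth; x++)       // local column index, always 0..chunkWidth-1
                {
                    // Add the chunk's offset only on the world side of the assignment.
                    chunk[y][x] = world[chunkRow * chunkHeight + y][chunkCol * chunkWidth + x];
                }
            }
            return chunk;
        }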

    Read the article

  • Handy Javascript array Extensions – contains(…)

    - by Liam McLennan
    This javascript adds a method to javascript arrays that returns a boolean indicating if the supplied object is an element of the array:

    Array.prototype.contains = function(item) {
      for (var i = 0; i < this.length; i += 1) {
        if (this[i] === item) { return true; }
      }
      return false;
    };

    To test:

    alert([1,1,1,2,2,22,3,4,5,6,7,5,4].contains(2));  // true
    alert([1,1,1,2,2,22,3,4,5,6,7,5,4].contains(99)); // false

    Read the article

  • When to use an Array vs When to use a Vector, when dealing with GameObjects?

    - by user32465
    I understand from other answers that Arrays and Vectors are the best choices; many on SE claim that Linked Lists and Maps are bad for video game programming. I understand that for the most part I can use Arrays. However, I don't really understand exactly when to use Vectors over Arrays. Why even use Vectors? Wouldn't it be best if I simply always used an Array, so that I know how much memory my game needs? Specifically, my game would only ever load a single "Map" area of tiles, such as Map[100][100], so I could very easily have an array of GameObjectContainer GameObjects[100][100], which would reserve an entire map's worth of possible game objects, correct? So why use a Vector instead? Memory is quite large on modern hardware.

    Read the article

  • Initialize array in O(1) -- what is this trick called?

    - by user946850
    There is a pattern that trades the performance of array access against the need to iterate over the array when clearing it. You keep a generation counter with each entry, and also a global generation counter. The "clear" operation increases the global counter. On each access, you compare the local and global generation counters; if they differ, the array has been reset since that entry was written, so the entry is treated as empty. This has come up on Stack Overflow recently, but I don't remember whether this trick has an official name. Does it? One use case is Dijkstra's algorithm when only a tiny subset of the nodes has to be relaxed, and when this has to be done repeatedly.
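
    Whatever the trick is called, a minimal sketch of it (generic C#; the class name is made up) looks like this -- Clear() is O(1) because it only bumps the global generation counter, and stale slots simply read as empty.

        public class GenerationArray<T>
        {
            private readonly T[] values;
            private readonly int[] stamps;   // generation in which each slot was last written
            private int generation = 1;      // global generation counter

            public GenerationArray(int size)
            {
                values = new T[size];
                stamps = new int[size];      // all zero, i.e. "written before generation 1"
            }

            public void Set(int index, T value)
            {
                values[index] = value;
                stamps[index] = generation;
            }

            public bool TryGet(int index, out T value)
            {
                if (stamps[index] == generation) { value = values[index]; return true; }
                value = default(T);          // stale entry: behaves as if the array was cleared
                return false;
            }

            public void Clear() { generation++; }   // O(1): invalidates every slot at once
        }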

    Read the article

  • How to implement dynamic binary search for search and insert operations of n elements (C or C++)

    - by iecut
    The idea is to use multiple arrays, each of length 2^k, to store n elements, according to binary representation of n.Each array is sorted and different arrays are not ordered in any way. In the above mentioned data structure, SEARCH is carried out by a sequence of binary search on each array. INSERT is carried out by a sequence of merge of arrays of the same length until an empty array is reached. More Detail: Lets suppose we have a vertical array of length 2^k and to each node of that array there attached horizontal array of length 2^k. That is, to the first node of vertical array, a horizontal array of length 2^0=1 is connected,to the second node of vertical array, a horizontal array of length 2^1= 2 is connected and so on. So the insert is first carried out in the first horizontal array, for the second insert the first array becomes empty and second horizontal array is full with 2 elements, for the third insert 1st and 2nd array horiz. array are filled and so on. I implemented the normal binary search for search and insert as follows: int main() { int a[20]= {0}; int n, i, j, temp; int *beg, *end, *mid, target; printf(" enter the total integers you want to enter (make it less then 20):\n"); scanf("%d", &n); if (n = 20) return 0; printf(" enter the integer array elements:\n" ); for(i = 0; i < n; i++) { scanf("%d", &a[i]); } // sort the loaded array, binary search! for(i = 0; i < n-1; i++) { for(j = 0; j < n-i-1; j++) { if (a[j+1] < a[j]) { temp = a[j]; a[j] = a[j+1]; a[j+1] = temp; } } } printf(" the sorted numbers are:"); for(i = 0; i < n; i++) { printf("%d ", a[i]); } // point to beginning and end of the array beg = &a[0]; end = &a[n]; // use n = one element past the loaded array! // mid should point somewhere in the middle of these addresses mid = beg += n/2; printf("\n enter the number to be searched:"); scanf("%d",&target); // binary search, there is an AND in the middle of while()!!! while((beg <= end) && (*mid != target)) { // is the target in lower or upper half? if (target < *mid) { end = mid - 1; // new end n = n/2; mid = beg += n/2; // new middle } else { beg = mid + 1; // new beginning n = n/2; mid = beg += n/2; // new middle } } // find the target? if (*mid == target) { printf("\n %d found!", target); } else { printf("\n %d not found!", target); } getchar(); // trap enter getchar(); // wait return 0; } Could anyone please suggest how to modify this program or a new program to implement dynamic binary search that works as explained above!!
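
    The structure described above is language-agnostic; as an illustration (sketched in C# rather than C, and simplified to int keys), one sorted array is kept per set bit of n, SEARCH binary-searches each level, and INSERT merges equal-length arrays upward until an empty level is found -- exactly like binary addition with carries.

        using System;
        using System.Collections.Generic;

        public class DynamicBinarySearch
        {
            // levels[k] is either null or a sorted array of exactly 2^k keys.
            private readonly List<int[]> levels = new List<int[]>();

            public bool Search(int key)
            {
                foreach (int[] arr in levels)
                    if (arr != null && Array.BinarySearch(arr, key) >= 0)
                        return true;
                return false;
            }

            public void Insert(int key)
            {
                int[] carry = { key };   // a full array of length 2^0
                int k = 0;
                // Merge equal-length arrays until a free level is reached (binary carry).
                while (k < levels.Count && levels[k] != null)
                {
                    carry = Merge(levels[k], carry);
                    levels[k] = null;
                    k++;
                }
                if (k == levels.Count) levels.Add(carry);
                else levels[k] = carry;
            }

            // Standard merge of two sorted arrays.
            private static int[] Merge(int[] a, int[] b)
            {
                int[] result = new int[a.Length + b.Length];
                int i = 0, j = 0, r = 0;
                while (i < a.Length && j < b.Length)
                    result[r++] = (a[i] <= b[j]) ? a[i++] : b[j++];
                while (i < a.Length) result[r++] = a[i++];
                while (j < b.Length) result[r++] = b[j++];
                return result;
            }
        }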

    Read the article

  • Help with refactoring PHP code

    - by Richard Knop
    I had some troubles implementing Lawler's algorithm but thanks to SO and a bounty of 200 reputation I finally managed to write a working implementation: http://stackoverflow.com/questions/2466928/lawlers-algorithm-implementation-assistance I feel like I'm using too many variables and loops there though so I am trying to refactor the code. It should be simpler and shorter yet remain readable. Does it make sense to make a class for this? Any advice or even help with refactoring this piece of code is welcomed: <?php /* * @name Lawler's algorithm PHP implementation * @desc This algorithm calculates an optimal schedule of jobs to be * processed on a single machine (in reversed order) while taking * into consideration any precedence constraints. * @author Richard Knop * */ $jobs = array(1 => array('processingTime' => 2, 'dueDate' => 3), 2 => array('processingTime' => 3, 'dueDate' => 15), 3 => array('processingTime' => 4, 'dueDate' => 9), 4 => array('processingTime' => 3, 'dueDate' => 16), 5 => array('processingTime' => 5, 'dueDate' => 12), 6 => array('processingTime' => 7, 'dueDate' => 20), 7 => array('processingTime' => 5, 'dueDate' => 27), 8 => array('processingTime' => 6, 'dueDate' => 40), 9 => array('processingTime' => 3, 'dueDate' => 10)); // precedence constrainst, i.e job 2 must be completed before job 5 etc $successors = array(2=>5, 7=>9); $n = count($jobs); $optimalSchedule = array(); for ($i = $n; $i >= 1; $i--) { // jobs not required to precede any other job $arr = array(); foreach ($jobs as $k => $v) { if (false === array_key_exists($k, $successors)) { $arr[] = $k; } } // calculate total processing time $totalProcessingTime = 0; foreach ($jobs as $k => $v) { if (true === array_key_exists($k, $arr)) { $totalProcessingTime += $v['processingTime']; } } // find the job that will go to the end of the optimal schedule array $min = null; $x = 0; $lastKey = null; foreach($arr as $k) { $x = $totalProcessingTime - $jobs[$k]['dueDate']; if (null === $min || $x < $min) { $min = $x; $lastKey = $k; } } // add the job to the optimal schedule array $optimalSchedule[$lastKey] = $jobs[$lastKey]; // remove job from the jobs array unset($jobs[$lastKey]); // remove precedence constraint from the successors array if needed if (true === in_array($lastKey, $successors)) { foreach ($successors as $k => $v) { if ($lastKey === $v) { unset($successors[$k]); } } } } // reverse the optimal schedule array and preserve keys $optimalSchedule = array_reverse($optimalSchedule, true); // add tardiness to the array $i = 0; foreach ($optimalSchedule as $k => $v) { $optimalSchedule[$k]['tardiness'] = 0; $j = 0; foreach ($optimalSchedule as $k2 => $v2) { if ($j <= $i) { $optimalSchedule[$k]['tardiness'] += $v2['processingTime']; } $j++; } $i++; } echo '<pre>'; print_r($optimalSchedule); echo '</pre>';

    Read the article

  • Delphi: EInvalidOp in neural network class (TD-lambda)

    - by user89818
    I have the following draft for a neural network class. This neural network should learn with TD-lambda. It is started by calling the getRating() function. But unfortunately, there is an EInvalidOp (invalid floading point operation) error after about 1000 iterations in the following lines: neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda Why is this error? I can't find the mistake in my code :( Can you help me? Thank you very much in advance! unit uNeuronalesNetz; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, ExtCtrls, StdCtrls, Grids, Menus, Math; const NEURONS_INPUT = 43; // number of neurons in the input layer NEURONS_HIDDEN = 60; // number of neurons in the hidden layer NEURONS_OUTPUT = 1; // number of neurons in the output layer NEURONS_TOTAL = NEURONS_INPUT+NEURONS_HIDDEN+NEURONS_OUTPUT; // total number of neurons in the network MAX_TIMESTEPS = 42; // maximum number of timesteps possible (after 42 moves: board is full) LEARNING_RATE_INPUT = 0.25; // in ideal case: decrease gradually in course of training LEARNING_RATE_HIDDEN = 0.15; // in ideal case: decrease gradually in course of training GAMMA = 0.9; LAMBDA = 0.7; // decay parameter for eligibility traces type TFeatureVector = Array[1..43] of SmallInt; // definition of the array type TFeatureVector TArtificialNeuralNetwork = class // definition of the class TArtificialNeuralNetwork private // GENERAL SETTINGS START learningMode: Boolean; // does the network learn and change its weights? // GENERAL SETTINGS END // NETWORK CONFIGURATION START neuronsInput: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_INPUT] of Extended; // array of all input neurons (their values) for every timestep neuronsHidden: Array[1..NEURONS_HIDDEN] of Extended; // array of all hidden neurons (their values) neuronsOutput: Array[1..NEURONS_OUTPUT] of Extended; // array of output neurons (their values) weightsInput: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Extended; // array of weights: input->hidden weightsHidden: Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // array of weights: hidden->output // NETWORK CONFIGURATION END // LEARNING SETTINGS START outputBefore: Array[1..NEURONS_OUTPUT] of Extended; // the network's output value in the last timestep (the one before) eligibilityTraceHidden: Array[1..NEURONS_INPUT] of Array[1..NEURONS_HIDDEN] of Array[1..NEURONS_OUTPUT] of Extended; // array of eligibility traces: hidden layer eligibilityTraceOutput: Array[1..NEURONS_TOTAL] of Array[1..NEURONS_TOTAL] of Extended; // array of eligibility traces: output layer reward: Array[1..MAX_TIMESTEPS] of Array[1..NEURONS_OUTPUT] of Extended; // the reward value for all output neurons in every timestep tdError: Array[1..NEURONS_OUTPUT] of Extended; // the network's error value for every single output neuron t: Byte; // current timestep cyclesTrained: Integer; // number of cycles trained so far (learning rates could be decreased accordingly) last50errors: Array[1..50] of Extended; // LEARNING SETTINGS END public constructor Create; // create the network object and do the initialization procedure UpdateEligibilityTraces; // update the eligibility traces for the hidden and output layer procedure tdLearning; // learning algorithm: adjust the network's weights procedure ForwardPropagation; 
// propagate the input values through the network to the output layer function getRating(state: TFeatureVector; explorative: Boolean): Extended; // get the rating for a given state (feature vector) function HyperbolicTangent(x: Extended): Extended; // calculate the hyperbolic tangent [-1;1] procedure StartNewCycle; // start a new cycle with everything set to default except for the weights procedure setLearningMode(activated: Boolean=TRUE); // switch the learning mode on/off procedure setInputs(state: TFeatureVector); // transfer the given feature vector to the input layer (set input neurons' values) procedure setReward(currentReward: SmallInt); // set the reward for the current timestep (with learning then or without) procedure nextTimeStep; // increase timestep t function getCyclesTrained(): Integer; // get the number of cycles trained so far procedure Visualize(imgHidden: Pointer); // visualize the neural network's hidden layer end; implementation procedure TArtificialNeuralNetwork.UpdateEligibilityTraces; var i, j, k: Integer; begin // how worthy is a weight to be adjusted? for j := 1 to NEURONS_HIDDEN do begin for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceOutput[j][k] := LAMBDA*eligibilityTraceOutput[j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*neuronsHidden[j]; for i := 1 to NEURONS_INPUT do begin eligibilityTraceHidden[i][j][k] := LAMBDA*eligibilityTraceHidden[i][j][k]+(neuronsOutput[k]*(1-neuronsOutput[k]))*weightsHidden[j][k]*neuronsHidden[j]*(1-neuronsHidden[j])*neuronsInput[t][i]; end; end; end; end; procedure TArtificialNeuralNetwork.setReward; VAR i: Integer; begin for i := 1 to NEURONS_OUTPUT do begin // +1 = player A wins // 0 = draw // -1 = player B wins reward[t][i] := currentReward; end; end; procedure TArtificialNeuralNetwork.tdLearning; var i, j, k: Integer; begin if learningMode then begin for k := 1 to NEURONS_OUTPUT do begin if reward[t][k] = 0 then begin tdError[k] := GAMMA*neuronsOutput[k]-outputBefore[k]; // network's error value when reward is 0 end else begin tdError[k] := reward[t][k]-outputBefore[k]; // network's error value in the final state (reward received) end; for j := 1 to NEURONS_HIDDEN do begin weightsHidden[j][k] := weightsHidden[j][k]+LEARNING_RATE_HIDDEN*tdError[k]*eligibilityTraceOutput[j][k]; // adjust hidden->output weights according to TD-lambda for i := 1 to NEURONS_INPUT do begin weightsInput[i][j] := weightsInput[i][j]+LEARNING_RATE_INPUT*tdError[k]*eligibilityTraceHidden[i][j][k]; // adjust input->hidden weights according to TD-lambda end; end; end; end; end; procedure TArtificialNeuralNetwork.ForwardPropagation; var i, j, k: Integer; begin for j := 1 to NEURONS_HIDDEN do begin neuronsHidden[j] := 0; for i := 1 to NEURONS_INPUT do begin neuronsHidden[j] := neuronsHidden[j]+neuronsInput[t][i]*weightsInput[i][j]; // input -> hidden end; neuronsHidden[j] := HyperbolicTangent(neuronsHidden[j]); // activation of hidden neuron j end; for k := 1 to NEURONS_OUTPUT do begin neuronsOutput[k] := 0; for j := 1 to NEURONS_HIDDEN do begin neuronsOutput[k] := neuronsOutput[k]+neuronsHidden[j]*weightsHidden[j][k]; // hidden -> output end; neuronsOutput[k] := HyperbolicTangent(neuronsOutput[k]); // activation of output neuron k end; end; procedure TArtificialNeuralNetwork.setLearningMode; begin learningMode := activated; end; constructor TArtificialNeuralNetwork.Create; var i, j, k: Integer; begin inherited Create; Randomize; // initialize random numbers generator learningMode := TRUE; cyclesTrained := -2; // only set to -2 because it will be 
increased twice in the beginning StartNewCycle; for j := 1 to NEURONS_HIDDEN do begin for k := 1 to NEURONS_OUTPUT do begin weightsHidden[j][k] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5 end; for i := 1 to NEURONS_INPUT do begin weightsInput[i][j] := abs(Random-0.5); // initialize weights: 0 <= random < 0.5 end; end; for i := 1 to 50 do begin last50errors[i] := 0; end; end; procedure TArtificialNeuralNetwork.nextTimeStep; begin t := t+1; end; procedure TArtificialNeuralNetwork.StartNewCycle; var i, j, k, m: Integer; begin t := 1; // start in timestep 1 cyclesTrained := cyclesTrained+1; // increase the number of cycles trained so far for j := 1 to NEURONS_HIDDEN do begin neuronsHidden[j] := 0; for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceOutput[j][k] := 0; outputBefore[k] := 0; neuronsOutput[k] := 0; for m := 1 to MAX_TIMESTEPS do begin reward[m][k] := 0; end; end; for i := 1 to NEURONS_INPUT do begin for k := 1 to NEURONS_OUTPUT do begin eligibilityTraceHidden[i][j][k] := 0; end; end; end; end; function TArtificialNeuralNetwork.getCyclesTrained; begin result := cyclesTrained; end; procedure TArtificialNeuralNetwork.setInputs; var k: Integer; begin for k := 1 to NEURONS_INPUT do begin neuronsInput[t][k] := state[k]; end; end; function TArtificialNeuralNetwork.getRating; begin setInputs(state); ForwardPropagation; result := neuronsOutput[1]; if not explorative then begin tdLearning; // adjust the weights according to TD-lambda ForwardPropagation; // calculate the network's output again outputBefore[1] := neuronsOutput[1]; // set outputBefore which will then be used in the next timestep UpdateEligibilityTraces; // update the eligibility traces for the next timestep nextTimeStep; // go to the next timestep end; end; function TArtificialNeuralNetwork.HyperbolicTangent; begin if x > 5500 then // prevent overflow result := 1 else result := (Exp(2*x)-1)/(Exp(2*x)+1); end; end.

    Read the article
