Search Results

Search found 13692 results on 548 pages for 'bad practices'.


  • How do I design a cryptographic hash function?

    - by Eyal
    After reading the following answer about why one-way hash functions are one-way, I would like to know how to design a hash function: http://stackoverflow.com/questions/1038307/help-me-better-understand-cryptographic-hash-functions/1047106#1047106 Before everyone gets on my case: yes, I know it's a bad idea not to use a proven and tested hash function. I would still like to know how it's done. I'm familiar with Feistel-network ciphers, but those are necessarily reversible, which is horrible for a cryptographic hash. Is there some sort of construction that is well-used in cryptographic hashing? Something that makes it very one-way?
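
    For illustration, here is a toy Python sketch of the construction that is well-used in practice: Merkle-Damgard chaining with a Davies-Meyer compression step, the pattern behind MD5 and the SHA-1/SHA-2 families. The mixer E below is a made-up stand-in for a block cipher and is in no way cryptographic; the point is the feed-forward XOR, which makes each step one-way even though E(key, .) itself is invertible for a fixed key.

        BLOCK_SIZE = 8              # bytes per message block (real designs use 64+)
        MASK = (1 << 64) - 1        # work with a 64-bit chaining value

        def E(key: int, block: int) -> int:
            """Stand-in 'block cipher': invertible mixing, NOT cryptographic."""
            x = block
            for r in range(4):
                x = (x * 0x9E3779B97F4A7C15 + key + r) & MASK
                x ^= x >> 29
            return x

        def toy_hash(message: bytes) -> int:
            # Merkle-Damgard strengthening: pad with 0x80, zeros, then the length.
            padded = message + b"\x80"
            while len(padded) % BLOCK_SIZE != 0:
                padded += b"\x00"
            padded += len(message).to_bytes(BLOCK_SIZE, "big")

            h = 0x6A09E667F3BCC908  # arbitrary initial chaining value
            for i in range(0, len(padded), BLOCK_SIZE):
                m = int.from_bytes(padded[i:i + BLOCK_SIZE], "big")
                # Davies-Meyer feed-forward: h XOR'd back in makes the step
                # hard to invert even though E is reversible for a fixed key.
                h = E(m, h) ^ h
            return h

        print(hex(toy_hash(b"hello world")))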

    Read the article

  • Open Source Code Integrity - How does quality assurance work?

    - by rockinthesixstring
    I've thought about this before, and this topic has often steered me away from open source projects. Recently DotNetPanel changed its name to WebSitePanel and went open source. The rumor mill is speculating that Microsoft is behind this. My question (in multiple parts) is quite simple. Can somebody please explain to me how quality assurance works on open source projects? How can a closed application get "only better" once open source? Doesn't the "too many cooks in the kitchen" theory apply when too many developers contribute (possibly bad) code to a project?

    Read the article

  • Best way to structure AJAX for a Zend Framework application

    - by John Nall
    Sorry, but there's a lot of outdated and just plain bad information out there for Zend Framework, since it has changed so much over the years and is so flexible. I thought of having an AJAX module service layer, with controllers and actions that interact with my model. Easy, but not very extensible, and it would violate DRY: if I change the logistics of some process, I'll have to edit both the AJAX controllers and the normal controllers. So ideally I would load the exact same actions for both JavaScript and non-JavaScript users. I have thought about checking for $_POST['ajax']; if it is set, I would load a different (JSON-style) view for the data. I'm wondering about a good way to do this (a front controller plugin, I imagine?), or whether someone can point me to an up-to-date tutorial that describes a really good way to build a larger AJAX application. Thanks.

    Read the article

  • Freelancer's problem and legal actions

    - by user198003
    Hi, last year I worked as a freelancer on one project. Unfortunately, the person I worked for decided he was not happy with my work, and he fired me. Seven months later, he called me and asked me to return "his" money; otherwise, he will sue me. His version is that I have to give him 1500 euros; my version is that he owes me another 1500. I have no contract, only emails and an Excel file with counted hours. What do I have to do? I don't want to give him back the 1500, because it was my work, and his bad management. But I also won't pursue my own 1500, because I don't think that would be fair on my side. What should I do?

    Read the article

  • ObjectDataSource and GridView: sorting, paging, filtering

    - by Simon
    Hi there, I'm using Entity Framework 1.0 and trying to feed a GridView with an ObjectDataSource that has access to my facade. The problem is that this seems to be particularly difficult, and I haven't seen anything on the internet that really does what I want. For those who know: a GridView fed by an ObjectDataSource can't sort automatically, so you must do it manually. That's not so bad. Where it becomes a nightmare is when we add paging and filter settings to the GridView's data source. After many hours searching the internet, I'm asking you, guys, whether anyone knows a link that can explain how to mix paging, sorting, and filtering for a GridView and an ObjectDataSource. Thanks in advance, and sorry for my English.

    Read the article

  • Strange: Planner picks the plan with the lower cost, but (very) long query runtime

    - by S38
    Facts:
      - PostgreSQL 8.4.2 on Linux
      - I make use of table inheritance
      - Each table contains 3 million rows
      - Indexes on the joining columns are set
      - Table statistics (analyze, vacuum analyze) are up to date
      - The only table used is "node", with various partitioned sub-tables
      - Recursive query (pg >= 8.4)

    Here is the query:

        WITH RECURSIVE rows AS (
            SELECT * FROM (
                SELECT r.id, r.set, r.parent, r.masterid
                FROM d_storage.node_dataset r
                WHERE masterid = 3533933
            ) q
            UNION ALL
            SELECT * FROM (
                SELECT c.id, c.set, c.parent, r.masterid
                FROM rows r
                JOIN a_storage.node c ON c.parent = r.id
            ) q
        )
        SELECT r.masterid, r.id AS nodeid FROM rows r

    And its plan:

        CTE Scan on rows r  (cost=2742105.92..2862119.94 rows=6000701 width=16) (actual time=0.033..172111.204 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..2742105.92 rows=6000701 width=28) (actual time=0.029..172111.183 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.025..0.027 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Hash Join  (cost=0.33..262208.33 rows=600070 width=28) (actual time=40628.371..57370.361 rows=1 loops=3)
                        Hash Cond: (c.parent = r.id)
                        ->  Append  (cost=0.00..211202.04 rows=12001404 width=20) (actual time=0.011..46365.669 rows=12000004 loops=3)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.002..0.002 rows=0 loops=3)
                              ->  Seq Scan on node_dataset c  (cost=0.00..55001.01 rows=3000001 width=20) (actual time=0.007..3426.593 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=0.008..9049.189 rows=3000001 loops=3)
                              ->  Seq Scan on node_stammdaten_adresse c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=3.455..8381.725 rows=3000001 loops=3)
                              ->  Seq Scan on node_testdaten c  (cost=0.00..52059.01 rows=3000001 width=20) (actual time=1.810..5259.178 rows=3000001 loops=3)
                        ->  Hash  (cost=0.20..0.20 rows=10 width=16) (actual time=0.010..0.010 rows=1 loops=3)
                              ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.002..0.004 rows=1 loops=3)
        Total runtime: 172111.371 ms

    So far so bad: the planner chooses a hash join (good) but no indexes (bad).
    Now, after SET enable_hashjoin TO false; the plan looks like this:

        CTE Scan on rows r  (cost=15198247.00..15318261.02 rows=6000701 width=16) (actual time=0.038..49.221 rows=4 loops=1)
          CTE rows
            ->  Recursive Union  (cost=0.00..15198247.00 rows=6000701 width=28) (actual time=0.032..49.201 rows=4 loops=1)
                  ->  Index Scan using node_dataset_masterid on node_dataset r  (cost=0.00..8.60 rows=1 width=28) (actual time=0.028..0.031 rows=1 loops=1)
                        Index Cond: (masterid = 3533933)
                  ->  Nested Loop  (cost=0.00..1507822.44 rows=600070 width=28) (actual time=10.384..16.382 rows=1 loops=3)
                        Join Filter: (r.id = c.parent)
                        ->  WorkTable Scan on rows r  (cost=0.00..0.20 rows=10 width=16) (actual time=0.001..0.003 rows=1 loops=3)
                        ->  Append  (cost=0.00..113264.67 rows=3001404 width=20) (actual time=8.546..12.268 rows=1 loops=4)
                              ->  Seq Scan on node c  (cost=0.00..24.00 rows=1400 width=20) (actual time=0.001..0.001 rows=0 loops=4)
                              ->  Bitmap Heap Scan on node_dataset c  (cost=58213.87..113214.88 rows=3000001 width=20) (actual time=1.906..1.906 rows=0 loops=4)
                                    Recheck Cond: (c.parent = r.id)
                                    ->  Bitmap Index Scan on node_dataset_parent  (cost=0.00..57463.87 rows=3000001 width=0) (actual time=1.903..1.903 rows=0 loops=4)
                                          Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_parent on node_stammdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=3.272..3.273 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_stammdaten_adresse_parent on node_stammdaten_adresse c  (cost=0.00..8.60 rows=1 width=20) (actual time=4.333..4.333 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
                              ->  Index Scan using node_testdaten_parent on node_testdaten c  (cost=0.00..8.60 rows=1 width=20) (actual time=2.745..2.746 rows=0 loops=4)
                                    Index Cond: (c.parent = r.id)
        Total runtime: 49.349 ms

    Incredibly faster, because indexes were used. Notice that the estimated cost of the second query is somewhat higher than that of the first. So the main question is: why does the planner make the first decision instead of the second? Also interesting: via SET enable_seqscan TO false; I temporarily disabled seq scans; then the planner used indexes and hash joins, and the query was still slow. So the problem seems to be the hash join. Maybe someone can help in this confusing situation? Thanks, R.

    Read the article

  • Display a PDF in HTML

    - by anil
    Hi, I want to show a PDF in a view in MVC. The following action returns the file:

        public ActionResult TakeoffPlans(string projID)
        {
            Highmark.BLL.Models.Project proj = GetProject(projID);
            List ff = proj.GetFiles(Project_Thin.Folders.CompletedTakeoff, false);
            ViewData["HasFile"] = "0";
            if (ff != null && ff.Count > 0 && ff.Where(p => p.FileExtension == "pdf").Count() > 0)
            {
                ViewData["HasFile"] = "1";
            }
            ViewData["ProjectID"] = projID;
            ViewData["Folder"] = Project_Thin.Folders.CompletedTakeoff;
            //return View("UcRenderPDF");
            string fileName = Server.MapPath("~/Content/Project List Update 2.pdf");
            return File(fileName, "application/pdf", Server.HtmlEncode(fileName));
        }

    But it displays some bad data in the view. Please help me with this.

    Read the article

  • Re-factoring a CURL request to Ruby's RestClient

    - by user94154
    I'm having trouble translating this curl request into Ruby using RestClient:

        system("curl --digest -u #{@user}:#{@pass} '#{@endpoint}/#{id}' --form image_file=@'#{path}' -X PUT")

    I keep getting 400 Bad Request errors. As far as I can tell, the request does get properly authenticated, but it hangs up on the file-upload part. Here are my best attempts, all of which get me those 400 errors:

        resource = RestClient::Resource.new "#{@endpoint}/#{id}", @user, @pass

        #attempt 1
        resource.put :image_file => File.new(path, 'rb'), :content_type => 'image/jpg'

        #attempt 2
        resource.put File.read(path), :content_type => 'image/jpg'

        #attempt 3
        resource.put File.open(path) {|f| f.read}, :content_type => 'image/jpg'

    Read the article

  • Excluding files from web logs

    - by Ray
    Looking through my web logs, I see a lot of entries that don't interest me. Some of them are commonly used images, CSS files, and scripts, which I can easily exclude by un-checking the 'log visits' check box in IIS for the folder properties. I would also like to exclude log entries for certain common requests which are not in their own folders, mostly 'favicon.ico', 'scriptresource.axd', and 'webresource.axd'. These (especially scriptresource.axd) make up almost a third of a typical log file on my site. So, the question is: how do I tell IIS not to log these requests? And is there any reason that this is a bad idea?

    Read the article

  • Anyone have any issues with using PLINQO and ASP.NET MVC 2.0?

    - by Chad
    I'm asking because I'm working on an ASP.NET MVC 1.0 site and thinking of upgrading to ASP.NET MVC 2.0. Then I read that PLINQO 5.0 was released (I had never heard of PLINQO before) and have been impressed with what PLINQO appears to be capable of. 1) Is PLINQO capable of building out an ASP.NET MVC 2.0 UI project when it's run? 2) Have you had any bad experiences using PLINQO (particularly in an ASP.NET MVC app)? Let me make sure I have the scenario right in my mind: using PLINQO (assuming it supports ASP.NET MVC 2.0), I should be able to point it at my DB and it will create 3 projects: data, test, and MVC 2.0 UI? The data project would contain LINQ to SQL queries, with the PLINQO extensions added in, and the other projects set up to use the data project by default?

    Read the article

  • Better way to clean this messy bool method

    - by Luís Custódio
    I'm reading the Clean Code book and I think my code is a little messy; I'd like some suggestions. I have a simple business requirement: return the date of the next execution of my thread. I have two class fields, _hour and _day. If the current day is greater than the _day field, I must return true, so I'll add a month to the execution date. If the day is the same but the current hour is greater than _hour, I should return true too. So I wrote this simple method:

        private bool ScheduledDateGreaterThanCurrentDate(DateTime dateActual)
        {
            if (dateActual.Day > _day)
            {
                return true;
            }
            if (dateActual.Day == _day && dateActual.Hour > _hour)
            {
                return true;
            }
            if (dateActual.Day == _day && dateActual.Hour == _hour)
                if (dateActual.Minute > 0 || dateActual.Second > 0)
                    return true;
            return false;
        }

    I'm programming with TDD, so I know the return values are correct, but this is hard-to-maintain code, right?
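
    As a point of comparison (a sketch only, in Python rather than C#): the three ifs together encode "the current time is strictly after day _day, hour _hour, minute 0, second 0", so the whole rule collapses into one datetime comparison if you construct that moment explicitly:

        from datetime import datetime

        def scheduled_date_passed(now: datetime, day: int, hour: int) -> bool:
            # The moment encoded by _day/_hour: that day and hour, minute 0, second 0.
            scheduled = now.replace(day=day, hour=hour, minute=0,
                                    second=0, microsecond=0)
            # True exactly when the three chained ifs above would return true.
            return now > scheduled

        print(scheduled_date_passed(datetime(2010, 5, 20, 10, 0, 1), 20, 10))  # True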

    Read the article

  • Including inline JavaScript using content_for in Rails

    - by TenJack
    I am using content_for and yield to inject JavaScript files into the bottom of my layout, but I'm wondering what the best practice is for including inline JavaScript. Specifically, I'm wondering where to put the script type declaration:

        <% content_for :javascript do %>
          <script type="text/javascript">
            ...
          </script>
        <% end %>

    or

        <% content_for :javascript do %>
          ...
        <% end %>

    with, in the layout:

        <script type="text/javascript">
          <%= yield :javascript %>
        </script>

    I am using the first option now and wondering if it is bad to include multiple <script>...</script> declarations within one view. Sometimes I have partials that lead to this.

    Read the article

  • Routinely sync a branch to master using git rebase

    - by m1755
    I have a Git repository with a branch that hardly ever changes (nobody else is contributing to it). It is basically the master branch with some code and files stripped out. Having this branch around makes it easy for me to package up a leaner version of my project without having to strip out the code and files manually every time. I have been using git rebase to keep this branch up to date with master, but I always get this warning when I try to push the branch after rebasing:

        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again.  See the 'Note about
        fast-forwards' section of 'git push --help' for details.

    I then use git push --force and it works, but I feel like this is probably bad practice. I want to keep this branch "in sync" with master quickly and easily. Is there a better way of handling this task?

    Read the article

  • Will JavaScript evaluate a property's value if it's not part of an assignment statement?

    - by Bungle
    I've come across a fairly obscure problem having to do with measuring a document's height before the styling has been applied by the browser. More information here: http://sonspring.com/journal/jquery-iframe-sizing http://ajaxian.com/archives/safari-3-onload-firing-and-bad-timing In Dave Hyatt's comment from June 27, he advises simply checking the document.body.offsetLeft property to force Safari to do a reflow. Can I simply use the following statement: document.body.offsetLeft; or do I need to assign it to a variable, e.g.: var force_layout = document.body.offsetLeft; in order for browsers to calculate that value? I think this comes down to a more fundamental question - that is, without being part of an assignment statement, will JavaScript still evaluate a property's value? Does this depend on whether a particular browser's JavaScript engine optimizes the code?
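
    Not JavaScript, but the fundamental question is easy to demonstrate in a language with observable property getters; this Python sketch (purely an analogy, with made-up names) shows a bare expression statement evaluating the property and running its side effects, which is what the offsetLeft trick relies on:

        class Body:
            @property
            def offset_left(self):
                print("layout recalculated")  # side effect runs on every read
                return 0

        body = Body()
        body.offset_left       # bare expression statement: the getter still runs
        x = body.offset_left   # assignment form: same effect, value also kept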

    Read the article

  • What is the purpose of keeping a staging server the same as production?

    - by truthseeker
    Hi, in our company we have staging and production servers, and I'm trying to keep them 1:1 after the latest release. We've got a web application running on several hosts, with many instances of it. The issue is that I advocate having the same architecture (structure) of the web applications on the staging and production servers, so we can easily test new features and avoid introducing new bugs with new releases. Not everyone agrees with me, though; for them it's not such a big deal to have different connections between the staging application instances, or even to have more applications, and more connections between applications, on staging than on the production server. I would like to ask about the pros and cons of such an approach: good points that support my view, or reasons why I may be wrong. Some examples of the consequences, and so forth, would help.

    Read the article

  • Should the Boost library depend on structure member alignment?

    - by Sorin Sbarnea
    I found out the hard way that at least boost::program_options depends on the compiler's configured structure member alignment. If you build Boost using default settings and link it with a project using 4-byte alignment (/Zp4), it will fail at runtime (I made a minimal test with program_options). Boost will raise an assert indicating a possible bad calling convention, but the real reason is the structure member alignment. Is there any way to prevent this? If the alignment makes the code incompatible, shouldn't it be included in the library naming?

    Read the article

  • How do I stall until a SharePoint List Item is Deleted with SPLongOperation?

    - by ccomet
    I have a workflow that creates a task and deletes it after the task is edited and its useful information acquired. I created a custom edit form for the task, so I have an SPLongOperation I can use to stall the page. This is necessary because if I don't stall the page in some fashion, the person will see the task in the task list for the brief moment before the workflow deletes it, and that is bad. So I need some code that stalls the page until the task is fully deleted. My current implementation basically amounts to a while loop that calls SPList.GetItemById until it throws an exception. Deliberately provoking an exception doesn't sit well with me, but I cannot think of a faster way to check this. I'm looking for alternatives that are preferably faster, or at least as fast, and that preferably don't rely on catching exceptions. Thank you in advance!

    Read the article

  • Firefox displaying .swf with wrong dimensions

    - by John Leonard
    I can't figure this one out. I have a Flex app with fixed dimensions (950 x 700). I'm compiling the SWF and using the default wrapper that Flex generates. In some instances of Firefox, the dimensions are wrong. On top of that, on different domains I get the .swf rendered with different (both wrong) dimensions. Firefox 3.6.3 on one computer works fine; on another, the dimensions are off. Swf example, and if you don't see a clipped swf, here are the screenshots: rendered incorrectly on one domain, and rendered on another domain with different bad dimensions. I'm at a total loss. When I embed the .swf with swfobject, I get the same issue. In the example you see, there's nothing in the app except a single label centered on the stage.

    Read the article

  • VBA macro on a timer to run code every set number of seconds, e.g. 120 seconds

    - by FinancialRadDeveloper
    I need to run a piece of code every 120 seconds, and I am looking for an easy way to do this in VBA. I know it would be possible to get the timer value from the Auto_Open event to avoid using a magic number, but I can't quite work out how to fire off a timer to run something every 120 seconds. I don't really want to use an infinite loop with a Sleep call if I can avoid it. EDIT: A cross-post based on an answer provided is at: http://stackoverflow.com/questions/2341762/excel-vba-application-ontime-i-think-its-a-bad-idea-to-use-this-thoughts-eit
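
    Not VBA, but for what it's worth, the usual answer has a self-rescheduling shape: a handler that does its work and then re-arms itself for the next interval (in Excel VBA, the common idiom passes Now + TimeValue("00:02:00") and the macro name to Application.OnTime from inside the macro itself). A minimal sketch of the pattern, in Python only for brevity:

        import threading

        INTERVAL_SECONDS = 120  # the 120-second period from the question

        def do_work():
            print("running the periodic task")

        def scheduled_task():
            do_work()
            # Re-arm at the end of each run, mirroring how an Application.OnTime
            # handler schedules its own next invocation.
            threading.Timer(INTERVAL_SECONDS, scheduled_task).start()

        scheduled_task()  # first run now; subsequent runs every 120 seconds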

    Read the article

  • Voting UI for showing like/dislike community comments side-by-side

    - by Justin Grant
    We want to add a comments/reviews feature to our website's plugin gallery, so users can vote a particular plugin up or down and leave an optional short comment about what they liked or didn't like about it. I'm looking for inspiration: ideally a good implementation elsewhere on the web that isn't annoying to end users, isn't impossibly complex to develop, and enables users to see both good and bad comments side by side, like this:

        Like: 57 votes                        Dislike: 8 votes
        ---------------------------------     --------------------------------
        "great plugin, saved me hours..."     "hard to install"
        "works well on MacOS and Ubuntu"      "Broken on Windows Vista with UAC enabled"
        "integrates well with version 3.2"
        More...                               More...

    Anyone know a site that does something like this?

    Read the article

  • SQL Server 2005 Reporting Services: how to use multiple datasets in a report

    - by larryq
    Hi everyone, I'm new to SQL Server Reporting Services and am trying to decipher an existing report. It's nothing too bad, but I notice it has two report datasets defined (generated via separate stored procedures). I'm trying to figure out where and how the report datasets are linked together so that the Fields collection has both sets of columns available and the report has a single rowset to traverse. Is there a section in the report layout where a join of datasets is defined? I'm using Visual Studio 2005 to design and preview the report, fwiw. Thanks for your help!

    Read the article

  • Where does complexity bloat come from?

    - by AareP
    Many of our design decisions are based on gut feeling about how to avoid complexity and bloat. Some of our complexity fears are justified: we have plenty of painful experience with throwing away deprecated code. Other times we learn that some particular task isn't really as complex as we thought it was. We notice, for example, that maintaining 3000 lines of code in one file isn't that difficult... or that using special-purpose "dirty flags" isn't really bad OO practice... or that in some cases it's more convenient to have 50 variables in one class than 5 different classes with shared responsibilities... One friend has even stated that adding functions to a program doesn't really add complexity to the system. So, what do you think: where does bloated complexity creep in from? Is it variable count, function count, line count, lines of code per function, or something else?

    Read the article

  • jQuery - improve/reduce my ipod-style dropdown code! - challenge?

    - by aSeptik
    Hi all guys! Taking inspiration from http://www.filamentgroup.com/examples/menus/ipod.php, I have made my own from scratch, because I needed this smart dropdown solution for a client, but more lightweight and efficient! So, with a good cup of coffee in hand, I have made this DEMO: http://jsbin.com/ufuga SOURCE: http://jsbin.com/ufuga/edit Since this is a proof of concept, it would be nice to know, before porting this into a plugin, what you think about it! Is it good or bad, and can it be improved or reduced in size? I'm glad to share this code with you, and it would be nice if you could give me any feedback! ;-) PS: it works perfectly in IE6+, Firefox, Chrome, and Opera, of course supports the jQuery Theme Roller, and has zero configuration steps! Thank you guys!

    Read the article

  • Python os.path.join

    - by Jim
    Hello, I am trying to learn Python and am making a program that will output a script. I want to use os.path.join, but am pretty confused (I know I am very bad at scripting/programming). According to the docs (http://docs.python.org/library/os.path.html), if I say os.path.join('c:', 'sourcedir'), I get 'c:sourcedir' as its output. According to the docs, this is normal (right?). But when I use the copytree command, I'd expect Python to output it the desired way, for example:

        import os.path
        import shutil

        src = os.path.join('c:', 'src')
        dst = os.path.join('c:', 'dst')
        shutil.copytree(src, dst)

    Here is the error I get:

        WindowsError: [Error 3] The system cannot find the path specified: 'C:src/.'

    If I wrap the os.path.join calls with os.path.normpath, I get the same error. If os.path.join can't be used this way, then I am confused as to its purpose. According to the pages suggested by Stack Overflow, slashes should not be used in join; is that correct? Thanks guys (girls) for your help.
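
    For reference, the drive-letter behaviour described above can be checked directly. This sketch uses ntpath, the Windows implementation behind os.path, so it runs on any platform:

        import ntpath  # what os.path resolves to on Windows

        # A bare drive letter stays relative to that drive's current directory,
        # so join inserts no separator:
        print(ntpath.join('c:', 'src'))    # c:src

        # Including the separator with the drive yields the absolute path
        # that copytree needs:
        print(ntpath.join('c:\\', 'src'))  # c:\src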

    Read the article

  • How should I change my Graph structure (very slow insertion)?

    - by Nazgulled
    Hi, the program I'm writing is about a social network, which means there are users and their profiles. The profile structure is UserProfile. Now, there are various possible graph implementations, and I don't think I'm using the best one. I have a Graph structure, and inside there's a pointer to a linked list of type Vertex. Each Vertex element has a value, a pointer to the next Vertex, and a pointer to a linked list of type Edge. Each Edge element has a value (so I can define weights and whatever else is needed), a pointer to the next Edge, and a pointer to the owning Vertex. I have two sample files with data to process (in CSV style) and insert into the graph. The first one holds the user data (one user per line); the second one holds the user relations (for the graph). The first file is quickly inserted into the graph because I always insert at the head, and there are ~18000 users. The second file takes ages, even though I still insert the edges at the head. It has about ~520000 lines of user relations and takes 13 to 15 minutes to insert into the graph. I made a quick test, and reading the data is quick, instantaneous really. The problem is the insertion, and it exists because my vertices live in a linked list: every time I need to insert a relation, I need to look up 2 vertices so I can link them together. Doing this for ~520000 relations takes a while. How should I solve this?

    Solution 1) Some people recommended implementing the graph (the vertices part) as an array instead of a linked list. That way I have direct access to every vertex, and the insertion time will probably drop considerably. But I don't like the idea of allocating an array with [18000] elements. How practical is that? My sample data has ~18000, but what if I need much less or much more? The linked list approach has that flexibility: I can have whatever size I want, as long as there's memory for it. The array doesn't, so how am I going to handle such a situation? What are your suggestions? Using linked lists is good for space complexity but bad for time complexity, and using an array is good for time complexity but bad for space complexity. Any thoughts about this solution?

    Solution 2) This project also demands some sort of data structure that allows quick lookup based on a name index and an ID index. For this I decided to use hash tables. My tables are implemented with separate chaining as collision resolution, and when a load factor of 0.70 is reached I recreate the table; I base the next table size on http://planetmath.org/encyclopedia/GoodHashTablePrimes.html. Currently, both hash tables hold a pointer to the UserProfile instead of duplicating the profile itself; duplicating would be stupid, since changing data would then require 3 changes. So I just save the pointer to the UserProfile. The same user profile pointer is also saved as the value in each graph Vertex. So, I have 3 data structures, one graph and two hash tables, and every single one of them points to the same exact UserProfile. The graph structure will serve the purpose of finding shortest paths and such, while the hash tables serve as quick indexes by name and ID. What I'm thinking, to solve my graph problem, is to have the hash table values point to the corresponding Vertex instead of to the UserProfile. It's still a pointer, no more and no less space is used; I just change what I point to.
    Like this, I can easily and quickly look up each Vertex I need and link the two together. This will insert the ~520000 relations pretty quickly. I thought of this solution because I already have the hash tables and I need to have them, so why not take advantage of them for indexing the graph vertices instead of the user profiles? It's basically the same thing; I can still reach the UserProfile quickly, just by going to the Vertex and then to the UserProfile. But do you see any cons of this second solution compared to the first one? Or only pros that overpower the pros and cons of the first solution? (A small sketch of the idea follows below.)

    Other solutions) If you have any other solution, I'm all ears, but please explain its pros and cons over the previous two. I really don't have much time to waste on this right now; I need to move on with this project, so if I'm going to make such a change, I need to understand exactly what to change and whether that's really the way to go. Hopefully no one fell asleep reading this and closed the browser; sorry for the big testament. But I really need to decide what to do about this, and I really need to make a change.

    P.S.: When answering my proposed solutions, please enumerate them as I did, so I know exactly what you're talking about and don't confuse myself more than I already am.
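
    For what it's worth, here is a minimal sketch of Solution 2, written in Python purely to show the shape of the idea (the real structures are in C): the ID hash table points at graph vertices, so each relation insert becomes two O(1) lookups plus two adjacency appends instead of two linear scans of the vertex list.

        class Vertex:
            def __init__(self, profile):
                self.profile = profile  # shared UserProfile record
                self.edges = []         # adjacency list: (other_vertex, weight)

        class Graph:
            def __init__(self):
                self.by_id = {}  # stand-in for the ID hash table: id -> Vertex

            def add_user(self, user_id, profile):
                self.by_id[user_id] = Vertex(profile)

            def add_relation(self, id_a, id_b, weight=1):
                a = self.by_id[id_a]  # O(1) instead of walking the vertex list
                b = self.by_id[id_b]
                a.edges.append((b, weight))
                b.edges.append((a, weight))

        g = Graph()
        g.add_user(1, {"name": "alice"})
        g.add_user(2, {"name": "bob"})
        g.add_relation(1, 2)
        print(len(g.by_id[1].edges))  # 1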

    Read the article
