Search Results

Search found 11553 results on 463 pages for 'bad programmer'.

  • Limiting my heavy thinking to my job [closed]

    - by Robin Castlin
    This might be a weird problem that is only half relevant to actual programming, but hopefully there are people here who know what I'm talking about. Basically, I'm proud of how I deal with coding problems and fix them on short notice, and of many other aspects like building new systems. I'm fast at finding solutions, and I often think about the impact my changes have on existing systems, thereby preventing problems from arising at all. I am simply happy with how my mind operates when it comes to programming, and I wouldn't want to change it at all. The problem, however, is when I'm not programming. I find myself rather limited in social situations. I can't determine whether it comes from programming, but I sometimes think way too much about the consequences when it comes to being social. I know from my own experience that most of the time you gain by not thinking about consequences, but it's hard for me not to. My friends often tell me I "think too much", and even though I agree, I can't seem to change this behavior. My brain wants to think, and it likes to overthink simple stuff. Does anyone recognize this bad habit of not leaving advanced thinking at work, and how do you deal with it? If this isn't a suitable place to ask, I apologize and hope you can point me to the right site.

  • The (non) Importance of Language

    - by Eric A. Stephens
    Working with a variety of clients on EA initiatives, one begins to realize that not everyone is a fan of EA. Specifically, they are not fans of the "a-word". Some organizations have abused the term by assigning the title to just about anyone who demonstrates above-average prowess with a particular technology. Other organizations assign the title to those managers left with no staff after a reorg. Some companies, unfortunately, have simply had a bad go of it with regard to EA... or any "A" for that matter. What we call "EA" is almost irrelevant. But what is not negotiable for a business to succeed is managing change. That is what EA is all about. I recall sitting in Zachman training led by the man himself. He posits that the only organizations that don't need EA (or whatever you want to call it) are those that are not changing. My experience suggests that orgs that aren't changing aren't growing. And if you aren't growing, you're dying. An EA program will not succeed unless there is a desire to change. No desire to change suggests the EA/Advisor/Change Agent should just walk the other way.

  • How to build a good service layer in ASP.NET?

    - by Swippen
    I have looked through some questions and technologies for building a good service layer, but I still have some questions I need help with. First, some background on our requirements. We currently have a number of web applications that talk to each other in a spiderweb-looking way (all talking to each other in a confusing mesh of web services and shared database data). We want to change this so that all applications go through a service layer, where we can do more with caching and encapsulate common functionality. We also want this layer to expose a Web API so that 3rd-party clients can consume information from the service. The problem I see is that if we build the service layer with, say, the MVC4 Web API, the applications have to communicate through that Web API, meaning we have to construct URLs and consume JSON/XML. That does not sound very efficient. I assume a better method would be to work with entities and WCF for communication between the applications, but then we might lose the Web API magic? So the question is whether there is a way to consume a service layer both as a Web API (JSON/XML) and as a more direct back-end service layer working with entities. If we are forced to build two different service layers, we may have to duplicate functionality, among other bad things. I hope the question is clear enough; please ask if you need any more information.
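
    One common shape for this is a single transport-neutral service core wrapped by a thin HTTP facade: in-process callers work with entities directly, while external clients get JSON through the facade. A minimal, framework-agnostic sketch in Java (all class and method names here are hypothetical, not taken from the question):

      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      // Core service layer: entities and caching live here, with no HTTP concerns.
      // Internal applications call this directly instead of going over the wire.
      class ProductService {
          private final Map<Long, Product> cache = new ConcurrentHashMap<>();

          Product findProduct(long id) {
              // Common functionality (caching, validation) is written once, here.
              return cache.computeIfAbsent(id, ProductService::loadFromDatabase);
          }

          private static Product loadFromDatabase(Long id) {
              return new Product(id, "example"); // stand-in for the real data access
          }
      }

      record Product(long id, String name) {}

      // Thin Web API facade: the only layer that knows about JSON (and, in a real
      // framework, URLs). Third-party clients hit this; it delegates to the core.
      class ProductApiFacade {
          private final ProductService service;
          ProductApiFacade(ProductService service) { this.service = service; }

          String productAsJson(long id) {
              Product p = service.findProduct(id);
              return "{\"id\":" + p.id() + ",\"name\":\"" + p.name() + "\"}";
          }
      }

    The duplication risk then shrinks to the facade itself, which is mechanical glue rather than business logic.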

  • Dealing with an underperforming co-worker

    - by PSU_Kardi
    I'm going to try to keep this topic as generic as I can so the question isn't closed as too specific. Anyway, let's get to it. I currently work on a somewhat small project with 15-20 developers. We recently hired a few new people because we had the hours and it synced up well with the schedule. It was refreshing to see hiring done this way and not just throwing hours and employees at a problem. Alas, I could argue the hiring process still isn't perfect, but that's another story for another day. Anyway, one of these developers is really underperforming. The developer is green and has a lot of bad habits: coming in later than I do and leaving earlier than I do. This in and of itself isn't an issue, but the lack of quality work makes it frustrating. When handing out tasking, the question is no longer what can realistically be assigned, but how much of the work we will have to redo. So as the project goes on, I'm afraid this might cause issues with the schedule. The schedule could be called a bit aggressive; given that this person is underperforming, it now goes, in my mind, from aggressive to potentially chaotic. Yes, one person shouldn't be able to make or break a schedule, and that in itself is an issue too, but please let's ignore that for right now. What's the best way to deal with this? I'm not the boss and I'm not the project lead, but I've been around for a while now and am not sure how to proceed. Complaining to management comes across as childish, and doing nothing seems wrong. I'll ask the community for insight/advice/suggestions.

  • Helping to Reduce the Page Compression Failure Rate

    - by Vasil Dimov
    When InnoDB compresses a page, it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit, we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. That said, compression failures are bad for performance and should be minimized. Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. And so it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even a per-index basis, wouldn't it? This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:

      +-----------------+--------------+------+
      | Field           | Type         | Null |
      +-----------------+--------------+------+
      | database_name   | varchar(192) | NO   |
      | table_name      | varchar(192) | NO   |
      | index_name      | varchar(192) | NO   |
      | compress_ops    | int(11)      | NO   |
      | compress_ops_ok | int(11)      | NO   |
      | compress_time   | int(11)      | NO   |
      | uncompress_ops  | int(11)      | NO   |
      | uncompress_time | int(11)      | NO   |
      +-----------------+--------------+------+

    similarly to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size". So a query like

      SELECT database_name, table_name, index_name,
             compress_ops - compress_ops_ok AS failures
      FROM information_schema.innodb_cmp_per_index
      ORDER BY failures DESC;

    would reveal the most problematic tables and indexes with the highest compression failure rates. From there on, the way to improve performance would be to try to increase the compressed page size, or to change the structure of the table/indexes or the data being stored, and see whether that has a positive impact on performance.

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code. It made me wonder about the ubiquitous GameManager class, and how it often becomes an issue in a production environment. Let me describe this. First, there's prototyping. Nobody cares about writing great code; we just try to get something running to see if the gameplay adds up. Then there's a greenlight, and in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects, nothing big, really. A cute, little manager. In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas for architecting the thing with Great Design Patterns. Then production starts and soon, of course, there is crunch time. The balanced diet is long gone, the bug tracker is creaking with issues, people are stressed, and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite). The reason is simple: after all, when writing a game, well... all the source code is actually there to manage the game. It's easy to just add this little extra feature or bugfix to the GameManager, where everything else is already stored anyway. When time is short, there's no way to write a separate class or to split this giant manager into sub-managers. Of course this is a classical anti-pattern: the god object. It's a bad thing, a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?
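
    One preventive tactic is to leave no convenient dumping ground in the first place: make the top-level object nothing more than a scheduler over narrowly scoped subsystems, so each new feature has an obvious, separate home. A minimal sketch in Java (the subsystem names are invented for illustration):

      import java.util.List;

      // Each concern gets its own narrow home; none of them knows about "the game".
      interface Subsystem {
          void update(float dt);
      }

      class PhysicsSystem implements Subsystem {
          public void update(float dt) { /* integrate bodies, resolve collisions */ }
      }

      class AudioSystem implements Subsystem {
          public void update(float dt) { /* mix and stream sounds */ }
      }

      // The "manager" shrinks to a dumb scheduler. Adding a feature under crunch
      // means adding a subsystem, not bolting another field onto a god object.
      class Game {
          private final List<Subsystem> systems =
              List.of(new PhysicsSystem(), new AudioSystem());

          void frame(float dt) {
              for (Subsystem s : systems) s.update(dt);
          }
      }

    It doesn't make crunch-time pressure go away, but it moves the path of least resistance from "stuff it into the manager" to "add a small class".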

  • Bayesian content filter for vbulletin [on hold]

    - by mc0e
    I've been tasked with coming up with a tool to automatically flag some posts for moderator attention on a large vBulletin forum. It's not spam per se, but the task has a lot in common with the sort of handling a spam-protection plugin (a "mod" in vBulletin speak) might do. There's only so much I can say, but the task does not involve bad users so much as particular kinds of posts which the moderators need to be aware of. Filtering user registrations and links is therefore not useful, and we are talking about posts by real human users. What I'm looking for is an existing Bayesian classification plugin, or something I can study to understand how to build the vBulletin side of the interface for such a thing. That is, I'd need ways for moderators to list flagged posts and to correct the classification of posts that have been misclassified. Ideally I want a 3-way split, with an "unsure" category, in order to reduce what has to be reviewed to find any misclassifications. Any pointers? I've searched around a bit, and so far what I've found is more or less entirely targeted at intervening in sign-ups (mostly using StopForumSpam), captchas, and external services like Akismet, which are spam-specific. I'm also considering an external solution that might be able to be interfaced.
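
    On the 3-way split itself: the usual approach is two thresholds on the classifier's posterior instead of one, so only the middle band lands in the moderators' review queue. A small language-agnostic illustration in Java (the threshold values and names are assumptions, not from any existing vBulletin plugin):

      enum Verdict { FLAG, UNSURE, OK }

      class ThreeWayFilter {
          // Posteriors above HIGH are confidently flagged; below LOW, confidently
          // fine; the band in between goes to moderators as "unsure". Widening the
          // band trades moderator workload for fewer silent misclassifications.
          private static final double HIGH = 0.9;
          private static final double LOW  = 0.1;

          Verdict classify(double posterior) {
              if (posterior >= HIGH) return Verdict.FLAG;
              if (posterior <= LOW)  return Verdict.OK;
              return Verdict.UNSURE;
          }
      }

    Moderator corrections would then feed back into the token counts, which is also what keeps the "unsure" band shrinking over time.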

  • Allow (and correct the URL) when there is a special character such as %26 using IIS and the rewrite module

    - by plumtreematt
    I'm struggling with a legacy app that uses special characters like %26 in the URL. The characters don't affect the app but can't be changed, so I'm trying to get IIS to deal with them. I've tried to ignore them using multiple methods, but nothing seems to work. So now I have installed the IIS rewrite module and added a rewrite rule to web.config to replace the %26 characters with _, for example:

      <rewrite>
        <rules>
          <rule name="ampersand" patternSyntax="Wildcard" stopProcessing="true">
            <match url="*%26*" />
            <action type="Redirect" url="{R:1}_{R:2}" />
          </rule>
        </rules>
      </rewrite>

    The problem is that IIS responds with "Bad Request" before the rewrite rule ever gets called. So my question is this: how can I change the order of precedence so that the rewrite module is called before IIS puts the ban hammer down on that URL?

  • Open Source Highlight: namebench

    - by eddraper
    DNS is a big deal. Even small incremental changes to improve its performance can yield significant value, due to the vast quantity of look-ups required when using the internet. Until now, it's always been one of those things I had to kinda take on faith... was my ISP doing a good job? Are those public DNS servers really that much faster? What about security and privacy concerns? Let me introduce you to namebench. This is the kind of tool I really love, one that immediately delivers value and is almost over-the-top OCD in its attention to detail. Trust me, this tool is utterly ruthless in its quest for getting it right; you're not left with a big question mark after it presents its data. The results are conclusive and actionable. Here's what it does: it hunts down the fastest DNS servers your desktop can reach, using thousands of requests. No, it doesn't pop up a little dialog in 10 seconds to give you some "off the cuff" answer from a handful of providers. It takes the better part of 10-15 minutes to run. When it finishes, it presents you with a veritable horn-o-plenty of data. Mean response duration, response distribution, bad data... no stone is left unturned. Check it out. You'll dig it.

  • Noise Canceling Earphones

    - by Mark Treadwell
    I travel a lot. The hours spent droning through the sky can be made more tolerable with an MP3 player and a set of noise-cancelling headphones. Reducing the sound of the airflow and engines is a great relief. For a year or two, I used a pair of folding Sony MDR-NC5 Noise Canceling Headphones, until the ear foam covers self-destructed. I replaced them with old washcloth material and was happy, but the DW thought it looked bad. I switched to a new set of Sony MDR-NC6 Noise Canceling Headphones. These worked equally well, although they did not fold as small as the MDR-NC5 headphones. Over four years of use, the MDR-NC6 headphones started cutting out and making popping noises. This was not surprising considering the beating they took in my backpack during travel. It looked like I needed another new set. The older MDR-NC5 headphones were still on the shelf with the hated washcloth covers. A quick search online showed a vibrant business in selling replacement ear foams, often at exorbitant prices. Nowhere did I see ear foam covers made for the oblong MDR-NC5. I then realized that foam is stretchable and that the shape should not matter. After another search and some consideration, I purchased 2-5/16" foam pad ear covers that were able to stretch over the MDR-NC5's strange shape. Problem solved for less than $5.

  • Standard/Compliance for web programming?

    - by MarkusK
    I am working with developers right now who write code the way they want, and when I tell them to do it another way they respond that it's just a matter of preference and that they have their way and I have mine. I am not talking about the formatting of code, but rather about the way a site is organized in classes and the way they utilize them, the way they create functions, process forms, etc. Their coding does not match my standards, but again they argue that it's a matter of preference and that, as long as the goal is achieved, there can be different ways to do it. I agree, but their way is proven to have bugs, and we spend a lot of time going back and forth with them to fix all the security and functionality problems, yet they still write the same code no matter how many times I've asked them to stop doing certain things. Now I am ready to dismiss them, but a friend of mine told me that he has the same exact problem with the freelance developers he works with. So I don't want to trade one bad apple for another. The question is: is there some worldwide (or at least Europe- and USA-wide) accepted standard or compliance regime for how to write secure web-based applications, and for what an application architecture should look like to be maintainable? Is there some general standard, usable for any language (Ruby, PHP, or Java), governing security, functionality, and quality of code? Or at least for the PHP and MySQL I use for my website? Then I could make them follow this strict standard and stop making excuses.

  • Leaving my ongoing internship for a better company

    - by AnonymousAsExpected
    Hi. I am currently employed as an intern with a software company X. I had no particular interest in working on their line of work, but since I had no other alternative, I had to join. I had also applied to a good company Y, which has a better name than X, and where the work is exactly what I wanted. More than anything, the mention of Y on my resume will make a big difference later. I really had no idea that Y would accept me, since they were not replying to my repeated requests for the status of my application. Now, fifteen days into my internship with X, Y has sent me an offer letter to join. What should I do? There are other interns working with X on the same project I was on, so of course the work won't suffer. My problem is, I want to quit X, but how do I do it in the most polite way possible? I am afraid to tell my boss; he is rude already, and I fear he might take it badly. And how do I handle this on my resume? If, two years down the line, someone asks me why I left X for Y, won't that look bad? I am really confused; working at Y would be like a dream come true, but I fear the negative impact of leaving X. I'm hoping someone can share a similar experience.

  • Hitching and Slowness Due to HDD Activity on Ubuntu, But Not Windows?

    - by Espionage724
    It's been bothering me for months now: I've noticed in Ubuntu (or any Linux distro I've tried) that any major I/O activity causes hitching and general slowness. For example, if I start a file transfer from my network computer to the computer I'm using and try moving the mouse after a while, it might not respond for a second or so. Similar incidents occur in other cases too (right-clicking to get a context menu takes a few seconds, hitting the drop-down application bar takes a while, etc.). My HDD isn't top-notch (a WD Blue 500GB 7200RPM drive), but I don't recall it being nearly this bad on Windows 7, 8, or 8.1. CPU activity during file transfers is relatively low (less than 10-20% on all cores of a Phenom II X4 @ 3.3GHz). I'm using GNOME System Monitor (on Xubuntu) and can't seem to see what kind of HDD activity is occurring, though. I have 8GB of RAM too, which is moderately used (2.5GB), so that shouldn't be a problem either. Any ideas what's up? I've tried kernels between 3.8 and 3.11 (I'm currently on Saucy with 3.11).

  • How to structure reading of commands given at a(n interactive) CLI prompt?

    - by Anto
    Let's say I have a program called theprogram (the marketing team was on strike when the product was to be named). I start that program by typing, perhaps not surprisingly, the program name as a command into a command prompt. After that, I get into a loop (from the user's standpoint, an interactive command-line prompt) where one command is read from the user and, depending on which command was given, the program executes some instructions. I have been doing something like the following (in C-like pseudocode):

      main_loop {
          in = read_input();
          if (in == "command 1")
              do_something();
          else if (in == "command 2")
              do_something_else();
          ...
      }

    (In a real program, I would probably encapsulate more things into different procedures; this is just an example.) This works well for a small number of commands, but let's say you have 100, 1000, or even 10 000 of them (the manual would be huge!). It is clearly a bad idea to have 10 000 ifs and else-ifs one after another: the program would be hard to read, hard to maintain, and full of boilerplate code. So what approach would you recommend? (I will probably never use 10 000 commands in a program, but the solution should preferably scale to that kind of massive problem. The solution doesn't have to allow for arguments to the commands.)
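
    The standard way to make this scale is a dispatch table: a map from command name to handler, populated once, so the 10 000th command costs one registration rather than another else-if. A minimal sketch in Java (the command names are placeholders):

      import java.util.HashMap;
      import java.util.Map;
      import java.util.Scanner;

      class CommandLoop {
          // One registry instead of a 10 000-branch if/else chain.
          private final Map<String, Runnable> commands = new HashMap<>();

          CommandLoop() {
              commands.put("command 1", () -> System.out.println("doing something"));
              commands.put("command 2", () -> System.out.println("doing something else"));
              // Each further command (or a whole plugin's worth) registers the same way.
          }

          void run() {
              Scanner input = new Scanner(System.in);
              while (input.hasNextLine()) {
                  String line = input.nextLine().trim();
                  if (line.equals("quit")) break;
                  commands.getOrDefault(line,
                      () -> System.out.println("unknown command: " + line)).run();
              }
          }

          public static void main(String[] args) {
              new CommandLoop().run();
          }
      }

    Lookup stays cheap no matter how many commands exist, and handlers can live in separate classes or modules registered at startup, which keeps a manual-sized command set maintainable.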

  • 13.04 Temp Save, rt3290, Kernel downgrade

    - by user170534
    This is kind of a multi-part question, but I didn't want to open more than one topic. I have a fresh install of Ubuntu with TLP configured, using acpi_call to keep the 7670M turned off. I was an Arch user for a short time; with Openbox and Firefox it ran at about 60 to 70 degrees, and I wanted to move to a stable release for exactly that reason.

      acpitz-virtual-0
      Adapter: Virtual device
      temp1:         +50.0°C

      radeon-pci-0100
      Adapter: PCI adapter
      temp1:        -128.0°C

      coretemp-isa-0000
      Adapter: ISA adapter
      Physical id 0:  +56.0°C  (high = +87.0°C, crit = +105.0°C)
      Core 0:         +54.0°C  (high = +87.0°C, crit = +105.0°C)
      Core 1:         +55.0°C  (high = +87.0°C, crit = +105.0°C)

    The temperature is not seriously high, yet it could be lower. Another issue is my wifi card, an rt3290: the rt2800pci module is fine and all that, but the download performance is pretty bad, alongside an annoying signal range, and the proprietary driver gives a configuration error about some rt2860-specific code and entries in the pci_dev file. If I blank out the variables causing the error, the module loads but the driver is unused. Since the driver is old, I was thinking of downgrading the Raring kernel to 3.7 or maybe 3.6. Should I, and can I?

  • Will we be penalized for having multiple external links to the same site?

    - by merk
    There seem to be conflicting answers to this question. The most relevant ones are at least a year or two old, so I thought it would be worth re-asking. My gut says it's OK, because there are plenty of sites out there that do this already. Every major retailer site usually has links to the manufacturer of whatever item it is selling; go to www.newegg.com and they have hundreds of links to the same site, since they sell multiple items from the same brand. Our site allows people to list a specific genre of items for sale (not porn; I'm just keeping it generic since I'm not trying to advertise), and on each item's listing page we include a link back to the seller's website if they want one. Our SEO guy is saying this is really bad and that Google is going to treat us as a link farm. My gut says that when we have to start limiting useful features just to boost our ranking, something is wrong; likewise if we start jumping through hoops like hiding text using JavaScript. Some clients are only selling one to a handful of items, while a couple of our bigger clients have hundreds of items listed, and so will have hundreds of pages that link back to their site. I should also mention that there will be a handful of pages for the bigger clients which may appear to be duplicates, because they will be selling 2 or 3 of the same item and the only difference in page content might be a stock number. The majority of the pages, though, will have unique content. So: will we be penalized in some way for having anywhere from a handful to a few hundred pages that all point to the same link? If we are penalized, what's the suggested way to handle this? We still want to give users the option to go to the client's site, and we would still like to give a link back to the client's site to help their own search-engine rankings.

  • Ubuntu 11.10 very slow compared to Windows

    - by Patrick
    I'm new to Ubuntu/Linux. Since my PC is very old and not very fast with Windows 7, I decided to give Ubuntu a try, so I downloaded and installed Ubuntu 11.10 today. When I first started it, I had a bad 800x600 resolution and it was very slow and annoying. So I installed a driver for my graphics card, and now everything looks very nice (1280x1024), but I think it's still far slower than Windows 7. I tried to run Ubuntu the way a few people suggested on the forum, but when I log in I get a black screen saying something like "this video mode cannot be displayed". I get that same screen when booting Ubuntu, by the way, but after about 15 seconds it disappears and Ubuntu just starts. I also installed other drivers for my graphics card, but everything stayed the same. I noticed that when I open Firefox or System Settings it takes about 5 seconds to open (while Windows 7 takes under 1 second to start, e.g., Chrome), and when I do this my CPU usage hits 100% for a short time. Computer specs: Memory: 2GB RAM. Processor: Intel Pentium 4 2.8GHz. Graphics card: Nvidia GeForce 6800 400MHz. I read on various forums that 11.04 works flawlessly on many PCs where 11.10 is very slow. Should I install 11.04, or could anybody help me with this problem?

  • Latest (5 June 14) Updates to 10.04 Causing Multiple Problems

    - by user291780
    Apologies: the questions are very short, but the background isn't. I received a routine notification from the update manager a few days ago (I believe June 5th). I took a look and there was lots of Linux stuff, headers, etc., nothing obviously unusual. I had received and applied a more extensive package set, kernel and all, a few weeks ago with no problem. On June 6, I pushed the upgrade button on the June 5th batch; nothing unusual. It needed a reboot, which I did after a full power-down, and it came up fine. gedit worked, the calculator worked. I started up Firefox, it came up, I selected the Bookmarks menu, and blam: it hesitated and then greyed out, and when I tried to close it I got the "process not responding" message. Undaunted, I tried to fire up Google Chrome... nothing on the screen or process bar. I fired up the system monitor, and indeed there were some "sleeping" Chrome processes "running". I powered down several times, but the same problems persist. Similar but worse story when I tried to fire up one of my virtual machines: VirtualBox came up fine, but when I tried to start one of my virtual machines I got a progress popup that I'd never seen before, which showed no progress past 20%. I uninstalled Oracle VirtualBox, reinstalled the latest and greatest, same result. I was also unable to log out or shut down once the virtual machine exhibited this behavior; I powered down manually, end of story. I have never seen such a bad result after an update. I'm running Ubuntu 10.04 LTS (Lucid Lynx), as I have been for a number of years. Please don't reply with "why don't you run some other version of Ubuntu"; that doesn't answer the questions below. Questions: Will there be a subsequent update that fixes this, and if so, when? If not, is there a way for me to get back to where I was before this disaster?

  • System always halts

    - by user211964
    Good day, and thanks Bruno for the prompt response. First, sorry for my bad writing; I'll try to clarify my problem. My system has now been updated to version 13.10. The problem is that my system always stops whatever it is doing whenever there is no input activity. Examples: I open a terminal and execute "sudo apt-get update", then I step away from my laptop, and the update stops at 20%. After I move my mouse, the update continues. When I watch a movie using VLC, I play the movie and switch to full screen; after 30-40 seconds the movie pauses, and again, after I move my mouse or hit any key on my keyboard, the movie continues to play. I was downloading a big torrent file and left my laptop on the whole night; the next morning, the download had simply stopped. The problem is that my laptop cannot be left idle: when I'm downloading a file or updating my system, I just cannot step away. I have to keep the laptop busy, surfing the internet, playing games, etc.

  • share code between check and process methods

    - by undu
    My job is to refactor an old library for GIS vector data processing. The main class encapsulates a collection of building outlines and offers different methods for checking data consistency. Those checking functions have an optional parameter that allows them to also perform some processing. For instance:

      std::vector<Point> checkIntersections(int process_mode = 0);

    This method tests whether some building outlines intersect and returns the intersection points. But if you pass a non-zero argument, the method will also modify the outlines to remove the intersections. I think that's pretty bad (at the call site, a reader not familiar with the code base will assume that a method called checkSomething only performs a check and doesn't modify data) and I want to change it. I also want to avoid code duplication, as the check and process methods are mostly similar. So I was thinking of something like this:

      // a private worker
      std::vector<Point> workerIntersections(int process_mode = 0)
      {
          // the equivalent of the current checkIntersections; it may perform
          // some processing depending on process_mode
      }

      // public interfaces for check and process
      std::vector<Point> checkIntersections() /* const */
      {
          return workerIntersections(0);
      }

      std::vector<Point> processIntersections(int process_mode /* I have different process modes */)
      {
          return workerIntersections(process_mode);
      }

    But that forces me to break const-correctness, because workerIntersections is a non-const method. How can I separate check and process, avoiding code duplication and keeping const-correctness?

  • How do I make a Live USB without destroying my Windows computer?

    - by user71089
    Unlike every other post I have read, let me begin by saying that I am TERRIFIED of destroying my Windows 7 home system while attempting to make a bootable Ubuntu thumb drive. I very specifically DO NOT want to attempt configuring a dual-boot system. What I am seeking is a portable operating system on a thumb drive, through which I can run my Corel software via VirtualBox-4.1.16-78094-Win. I want to be able to use my own software on my work computer without any worries about messing up the host system, anywhere I go. Supposedly, I just made a bootable thumb drive, having successfully loaded the ISO for ubuntu-12.04-desktop-i386 using Pendrivelinux's Universal-USB-Installer-1.9.0.1. However, all that I get is a thumb drive that wants to install Ubuntu onto my PC's HDD. I am not finding any clear path to this end in the posts I am reading. Like it or not, Windows is a fact of life for me. The goal is to be able to use my software on my work PC without doing anything intrusive that will cost me my job. If I have a meltdown on my home PC trying to make this happen, it will be nearly as bad. Is this even doable? Can the process be made clear? Thank you, Ubuntu world...!

  • Customer Reviews on Company Listings [closed]

    - by GSTAR
    I'm not sure if this is the right place, but I am after some general advice on a feature I am looking to implement on my website. The website is a wedding directory where companies can advertise in the form of a directory listing. The listing contains the company's details, such as what they do and how you can contact them. There are three packages available: a Basic package, which is free, and Silver and Gold packages, which are charged for. Now, in order to further enhance the directory, I want to enable customer reviews for each listing. This is where customers who have used a particular company's service can write a review (and rate it out of 5) describing their experience. The dilemma I face is that if customers are paying for their listings, then surely they will expect not to have content on their listing that would tarnish their reputation (i.e. negative reviews). But at the same time, I want to be impartial and help my site's visitors make informed choices about which companies to book when arranging their weddings. Of course I will exercise my ability to remove offensive or fake reviews, but I do not want to be removing reviews just to satisfy my clients. Suppose a client pays me a premium fee to have their listing on my front page and at the top of the listings; now suppose that client gets lots of negative reviews. The fact that they have increased visibility means that they will also get increased bad publicity. This in turn means they won't renew their package and will most likely request that the listing be taken down. So, how can I get the right balance here and keep everyone happy?

  • How should I implement a transaction database in EJB 3.0?

    - by JamesBoyZ
    In the CustomerTransactions entity, I have the following field to record what the customer bought:

      @ManyToMany
      private List<Item> listOfItemsBought;

    When I think more about this field, there's a chance it may not work, because merchants are allowed to change an item's information (e.g. price, discount, etc.). Hence, this field will not be able to record what the customer actually bought at the time the transaction occurred. At the moment, I can only think of two ways to make it work. 1. I record the transaction details in a String field. I feel this would be messy if I later need to extract some information about the transaction. 2. Whenever the merchant changes an item's information, I do not update that item's fields directly. Instead, I create a new item with all the new information and keep the old item untouched. I feel this is better because I can easily extract information about the transaction later on. The downside is that my Item table may contain a lot of rows. I'd be very grateful if someone could give me advice on how to tackle this problem. UPDATE: I'd like to add more information about the current design.

      public class Customer implements Serializable {
          @OneToMany
          private List<CustomerTransactions> listOfTransactions;
      }

      public class CustomerTransactions implements Serializable {
          @ManyToMany
          private List<Item> listOfItemsBought;
      }

      public class Merchant implements Serializable {
          @OneToMany
          private List<Item> listOfSellingItems;
      }
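
    A third option worth weighing is the classic order-line snapshot: rather than linking the transaction to the live Item, store a small entity that copies the fields that matter (name, unit price) at the moment of sale, so later edits to the Item cannot rewrite history. A hedged sketch, reusing the Item entity from the design above (the TransactionLine entity and its field names are my own invention, not part of the original design):

      import java.io.Serializable;
      import java.math.BigDecimal;
      import javax.persistence.*;

      // One row per item bought, frozen at purchase time. The live Item keeps
      // changing; this snapshot preserves what the customer actually paid.
      @Entity
      public class TransactionLine implements Serializable {
          @Id @GeneratedValue
          private Long id;

          @ManyToOne
          private Item item;               // optional pointer back to the live item

          private String nameAtSale;       // copied from Item when the sale happens
          private BigDecimal unitPriceAtSale;
          private int quantity;

          // getters/setters omitted
      }

    CustomerTransactions would then hold a @OneToMany List<TransactionLine> instead of the @ManyToMany List<Item>, which avoids both the unparseable String of option 1 and the row explosion of option 2, since only sold items get snapshot rows, not every edit.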

  • Naming the implementation version of an interface function

    - by bolov
    When I need to write an implementation version of an interface function, I put the implementation function in an impl namespace, but with the same name as the interface function. Is this bad practice? (The same-name part, that is; about the namespace part I am confident it's more than OK.) For me, as the person who wrote the code, there is no confusion between the two, but I want to make sure it isn't confusing for someone else. One other option would be to append an impl suffix to the function name, but since the function is already in a separate namespace named impl, that seems redundant. Is there an idiomatic way to do this? E.g.:

      namespace n {
      namespace impl {

      // implementation function (hidden from users)
      // same name, is it ok?
      void foo()
      {
          // ...
          // sometimes it needs to call itself recursively, or to call
          // overloads of the interface version:
          foo();    // calls the implementation version. Is this confusing?
          n::foo(); // calls the interface version. Is this confusing?
          // ...
      }

      } // namespace impl

      // interface function (exposed to users)
      void foo()
      {
          impl::foo();
      }

      } // namespace n

  • Why is heap size fixed on JVMs?

    - by themel
    Can anyone explain to me why JVMs (I didn't check too many, but I've never seen one that didn't do it this way) need to run with a fixed heap size? I know it's easier to implement on a simple contiguous heap, but the Sun JVM is now over a decade old, so I'd expect them to have had time to improve this. Needing to define the maximum memory size of your program at startup time seems such a 1960s thing to do, and then there are the bad interactions with OS virtual memory management: the GC retrieving swapped-out data, the inability to determine from the OS side how much memory a Java process is really using, huge amounts of VM address space wasted (I know, you don't care on your fancy 48-bit machines...). I also guess that the various sad attempts to build small operating systems inside the JVM (EE application servers, OSGi) are at least partially to blame on this circumstance, because running multiple Java processes on a system invariably leads to wasted resources: you have to give each of them the memory it might need at peak. Surprisingly, Google didn't yield the storms of outrage over this that I would expect, but they may just have been buried under the millions of people finding out about the fixed heap size and accepting it as a fact of life.
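
    For reference, the fixed bounds in question are the ones set at launch with -Xms/-Xmx: a running program can observe them, but nothing it does can raise the ceiling afterwards. A small Java snippet illustrating that (the figures it prints depend on your launch flags):

      public class HeapBounds {
          public static void main(String[] args) {
              Runtime rt = Runtime.getRuntime();
              // maxMemory() reports the ceiling fixed at startup (-Xmx); the JVM
              // throws OutOfMemoryError rather than grow past it, regardless of
              // how much free RAM the operating system has available.
              System.out.printf("max:   %d MiB%n", rt.maxMemory()   / (1024 * 1024));
              System.out.printf("total: %d MiB%n", rt.totalMemory() / (1024 * 1024));
              System.out.printf("free:  %d MiB%n", rt.freeMemory()  / (1024 * 1024));
          }
      }

      // e.g.: java -Xms256m -Xmx1g HeapBounds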
