Search Results

Search found 43110 results on 1725 pages for 'noob question'.


  • Biggest mistake you've ever made

    - by Rogue Coder
    Similar to a question I read on Server Fault: what is the biggest mistake you've ever made in an IT-related position? Some examples from friends: I needed to do some work on a production site, so I decided to copy over the live database to the beta site. Pretty standard, but when I went to the beta site it was still pulling out-of-date info. OOPS! I had copied the beta database over to the live site! Thank God for backups. And for me: I created a form for an event that was to be held during a specific time range. Participants would fill out the form for a chance to win, and we would send the event organizers a CSV from the database. I went into the database and found ONLY 1 ENTRY, MINE. Upon investigating, it appeared I had forgotten an auto-increment key, and because of the server setup there was no way to recover the lost data. I am aware this question is similar to ones on Stack Overflow, but the ones I found seemed to receive generic answers instead of actual stories :) What is the biggest coding error/mistake ever…

    Read the article

  • What triggered the popularity of lambda functions in modern mainstream programming languages?

    - by Giorgio
    In the last few years, anonymous functions (a.k.a. lambda functions) have become a very popular language construct, and almost every major/mainstream programming language has introduced them or plans to introduce them in an upcoming revision of the standard. Yet anonymous functions are a very old and very well-known concept in mathematics and computer science (invented by the mathematician Alonzo Church around 1936, and used by the Lisp programming language since 1958, see e.g. here). So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning, only introducing them later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement, or programming technique that started this phenomenon? IMPORTANT NOTE: The focus of this question is the introduction of anonymous functions in modern, mainstream (and therefore, maybe with a few exceptions, non-functional) languages. Also, note that anonymous functions (blocks) are present in Smalltalk, which is not a functional language, and that normal named functions have been present even in procedural languages like C and Pascal for a long time. Please do not overgeneralize your answers by speaking about "the adoption of the functional paradigm and its benefits", because this is not the topic of the question.
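
    For readers less familiar with the construct itself, here is a minimal illustration in Python (one of the mainstream languages in question; the example data is ours, not the asker's), contrasting a named function with an anonymous one passed inline:

        # Named function, in the older procedural style:
        def by_year(person):
            return person[1]

        people = [("Alonzo", 1903), ("John", 1927), ("Ada", 1815)]
        oldest_first = sorted(people, key=by_year)

        # Anonymous function, defined inline at the call site:
        oldest_first = sorted(people, key=lambda person: person[1])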

    Read the article

  • Best industry to work for as a developer.

    - by The Elite Gentleman
    Hi guys. Hmmm, Stack Overflow now warns me: "The question you're asking appears subjective and is likely to be closed." I've been an avid Java (J2SE, JEE) developer for over 5 years now (and I'm not complaining, even though I want to go back to Delphi & C++). My contract has just ended and I'm wondering which industry to tackle next. I've done the banking and insurance industries for all my career (with 1 year at a Fortune 500 company), and pretty much, banking is the slowest (and most boring) industry to work for (IMHO), since they're strict in their business practices (fair enough). The upside is that they pay. My question is: what is the best industry to work for, for a developer who tends to get bored relatively quickly? Is it also worthwhile for me to do consultancy (and if so, what type of consultancy)? PS: There's no gaming industry in South Africa, so suggesting it means I would have to travel to a country where gaming is alive! I don't see the Community Wiki checkbox, so I don't know how to make this a wiki.

    Read the article

  • Storing data for use on Android and Windows Applications

    - by Andy Mepham
    I posted this last night on Stack Overflow and was advised to move it over to Stack Exchange; thank you for taking a moment to look at my question. I'm developing a project proposal for my final-year project at university, and as I aim to use programming languages I am currently not too familiar with, I'm looking for some guidance. I can't include details of my project, but hopefully you will understand what I'm after. I'm going to be creating an Android application (in Java) and a Windows application (in C#) that will ideally access, query and update a remotely hosted database or set of XML files (this would most likely be over the Internet). I've done some looking around the internet, and SQLite seems like a safe bet for cross-platform manipulation of the database; however, I would like to keep the system as lightweight as possible, and I'm wondering whether XML files may provide a better alternative. Anyone out there who has experience using SQLite and/or remotely hosted XML for the purposes of Android and/or C# development that could point me in the right direction? If there is an alternative solution other than those I have mentioned, I would be interested to hear about it too. Thank you for taking the time to read my question. Edit: The purpose of this application is for a small-scale business; the data source would not need to be updated by more than one source but may be viewed from multiple sources (i.e. through multiple phones and a desktop PC). The database wouldn't be updating masses of data at a time (most likely single rows of a few tables at most).
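
    To give a sense of how lightweight the SQLite route is, here is a minimal sketch using Python's built-in sqlite3 module (table and column names are illustrative); Android's SQLiteDatabase class and the System.Data.SQLite library for C# follow the same open/execute/query pattern against the same file format:

        import sqlite3

        # Open (or create) a local database file; no server process is needed.
        conn = sqlite3.connect("business.db")
        conn.execute("CREATE TABLE IF NOT EXISTS orders "
                     "(id INTEGER PRIMARY KEY, customer TEXT, total REAL)")

        # Single-row updates, matching the usage described in the question.
        conn.execute("INSERT INTO orders (customer, total) VALUES (?, ?)",
                     ("Alice", 42.50))
        conn.commit()

        # Multiple readers can query the same file.
        for customer, total in conn.execute("SELECT customer, total FROM orders"):
            print(customer, total)
        conn.close()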

    Read the article

  • Making The EBS Upgrade From 11.5.10 Easier - Part III

    - by Annemarie Provisero
    ADVISOR WEBCAST: Making The EBS Upgrade From 11.5.10 Easier - Part III
    PRODUCT FAMILY: E-Business Suite
    July 19, 2011 at 8 am PT, 9 am MT, 11 am ET

    This one-hour session is recommended for technical users who are responsible for upgrading their E-Business Suite applications from Release 11.5.10 to Release 12.1.x. As you begin your upgrade process, there are a number of tools available to assist you in a successful upgrade. A successful upgrade requires careful planning, correct upgrade processing, detailed testing, and user (re)training prior to upgrade. Over three sessions we will discuss the tools that you can use to assist in your upgrade tasks. These tools are available to you via My Oracle Support and as part of the E-Business Suite product offerings. In this third session, we'll cover the best practices for using the upgrade tools. Additionally, this session includes an extended question and answer period.

    In the first part of the three-session series, we covered the following topics:
      - Overview of Tools Available for Upgrading
      - Upgrade versus Re-implementing
      - Upgrade Community
      - Upgrade Product Information Center Page
      - Detailed Look at Upgrade Advisor

    In the second session, we covered the following topics:
      - Recap of Part I
      - Detailed Look at Maintenance Wizard
      - Detailed Look at Patch Wizard

    A replay of those sessions is available via Note 740964.1, Advisor Webcast Archive. A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session.

    The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Documenting your database with Visual Studio 2012 SSDT tools

    - by krislankford
    The title of this post names something I suspect you and your colleagues wish you had a better way to do; I understand, as I am asked this question frequently. A couple of weeks ago I was asked it again by a customer who documents their database using the ApexSQL Doc tool, which uses the extended properties on objects to create automated documentation. I thought that was super interesting and went down the path to see how we could support the creation of this documentation while leveraging the Visual Studio 2012 SSDT tools. What I found was rather intriguing. There is a property called "Description" on all objects in the SSDT tools. This property is rather subtle, and I am betting it is often overlooked. To be honest, this property has probably been there for a while and I just never discovered it. Adding text to this "Description" property allows Visual Studio to emit the commands for the extended properties directly into your schema, which should be version controlled. As I did more digging, there seemed to be extended properties at every level of the SQL database objects. This fills some rather challenging gaps and allows organizations to manage SQL schema using the Visual Studio SQL database tools while gaining a way to automatically document the database. This will also work in the automation of the create and alter scripts that can be generated as part of an automated build system. Now we essentially get a way to store, build and document the database in a nice little ALM package. Happy Coding!

    Read the article

  • Grading an algorithm: Readability vs. Compactness

    - by amiregelz
    Consider the following question in a test/interview: Implement the strcpy() function in C:

        void strcpy(char *destination, char *source);

    The strcpy function copies the C string pointed to by source into the array pointed to by destination, including the terminating null character. Assume that the size of the array pointed to by destination is long enough to contain the same C string as source and does not overlap in memory with source. Say you were the tester; how would you grade the following answers to this question?

    1)

        void strcpy(char *destination, char *source)
        {
            while (*source != '\0') {
                *destination = *source;
                source++;
                destination++;
            }
            *destination = *source;
        }

    2)

        void strcpy(char *destination, char *source)
        {
            while (*(destination++) = *(source++))
                ;
        }

    The first implementation is straightforward: it is readable and programmer-friendly. The second implementation is shorter (one line of code) but less programmer-friendly; it's not so easy to understand the way this code works, and if you're not familiar with the operator precedence involved, that's a problem. I'm wondering whether the second answer would show more complexity and more advanced thinking in the tester's eyes, even though both algorithms behave the same, and although code readability is considered to be more important than code compactness. It seems to me that since making an algorithm this compact is more difficult, it shows a higher level of thinking as an answer in a test. However, it is also possible that a tester would consider the second answer not good because it's not readable. I would also like to mention that this is not specific to this example, but general to code readability vs. compactness when implementing an algorithm, specifically in tests/interviews.

    Read the article

  • How best to keep bumbling, non-technical managers at bay and still deliver good work?

    - by Curious
    This question may be considered subjective (I got a warning) and be closed, but I will risk it, as I need some good advice/experience on this. I read the following at the 'About' page of Fog Creek Software, the company that Joel Spolsky founded and is CEO of: Back in the year 2000, the founders of Fog Creek, Joel Spolsky and Michael Pryor, were having trouble finding a place to work where programmers had decent working conditions and got an opportunity to do great work, without bumbling, non-technical managers getting in the way. Every high tech company claimed they wanted great programmers, but they wouldn’t put their money where their mouth was. It started with the physical environment (with dozens of cubicles jammed into a noisy, dark room, where the salespeople shouting on the phone make it impossible for developers to concentrate). But it went much deeper than that. Managers, terrified of change, treated any new idea as a bizarre virus to be quarantined. Napoleon-complex junior managers insisted that things be done exactly their way or you’re fired. Corporate Furniture Police writhed in agony when anyone taped up a movie poster in their cubicle. Disorganization was so rampant that even if the ideas were good, it would have been impossible to make a product out of them. Inexperienced managers practiced hit-and-run management, issuing stern orders on exactly how to do things without sticking around to see the farcical results of their fiats. And worst of all, the MBA-types in charge thought that coding was a support function, basically a fancy form of typing. A blunt truth about most of today's big software companies! Unfortunately not every developer is as gutsy (or lucky, may I say?) as Joel Spolsky! So my question is: How best to work with such managers, keep them at bay and still deliver great work?

    Read the article

  • Google analytics - drop in traffic

    - by Andy
    Bit of a general question here. We are in the process of converting a number of our clients from older web sites to new ones. The problem we are getting, and sorry for being so general here, is a sharp decline in traffic as reported by Google Analytics. It's not a gradual decline; it seems to hit almost as soon as the new site goes live. I've just got a few questions to see if there is something we are doing wrong:

    a) We are using the same Analytics accounts going from the old site to the new. Is this a bad idea?

    b) The actual Analytics code is integrated into the pages using a server-side include. Is this a bad idea?

    c) We structure our sites differently from our old sites. The old sites would pretty much have all the web pages in the root directory, and hyperlinks would link to the page files, e.g. <a href="somepage.aspx">Link</a>. Our new sites have a directory structure that pretty much reflects the navigation structure, and hyperlinks link to the page's directory instead of the actual page, e.g. <a href="/new-items/shoes/">New shoes</a>. Is this a bad idea?

    I'm really searching for a needle in a haystack here. I would appreciate any help or advice as to why we are getting such a sharp and sudden drop in traffic. Again, sorry this is such a general question. Thanks in advance.

    Read the article

  • 2D Rectangle Collision Response with Multiple Rectangles

    - by Justin Skiles
    Similar to: Collision rectangle response. I have a level made up of tiles where the edges of the level are made up of collidable rectangles. The player's collision box is represented by a rectangle as well. The player can move in 8 directions. The player's velocity is equal in the X and Y directions and constant. Each update, I check the player's collision against all tiles that are within a certain distance. When the player collides with a rectangle, I find the intersection depth and resolve along the shallowest axis, followed by the other axis; this resolution happens for both axes in the same update. See below for two examples of situations where I am having trouble. Moving up-left against the left wall: in this scenario, the player is colliding with two tiles. The tile intersection depth is equal on both axes for the top tile and shallower on the X axis for the middle tile. Because the player is moving up the wall, the player should slide upward along the wall. This works properly as long as the rectangle with the shallower depth is evaluated first. If the rectangle with equal intersection depths is evaluated first, there is a chance the player becomes stuck. Moving up-left against the top wall: here is an identical scenario, except that the collision is with the top wall. The same problem occurs at the corners when intersection depth is equal on both axes. I guess my overall question is: how can I ensure that collision response occurs on tiles that have non-equal intersection depth before tiles that have equal intersection depth, to get around the weirdness that occurs at these corners? Sean's answer in the linked question was good, but his solution required having different velocity components in a certain direction. My situation has equal velocities, so there's no good way to tell which direction to resolve at corners. I hope I have made my explanation clear.
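
    One common remedy is to collect all overlapping tiles each update and resolve only the single shallowest penetration, re-testing after every push-out, since each resolution can change or eliminate the remaining overlaps; the ambiguous equal-depth corner tiles then only get handled once no unambiguous tile is left. A sketch in Python (Box and the resolution policy are our illustration, not the asker's code):

        from dataclasses import dataclass

        @dataclass
        class Box:
            x: float
            y: float
            w: float
            h: float

            @property
            def right(self):
                return self.x + self.w

            @property
            def bottom(self):
                return self.y + self.h

        def penetration(a, b):
            # Overlap depth of two axis-aligned boxes on each axis.
            dx = min(a.right, b.right) - max(a.x, b.x)
            dy = min(a.bottom, b.bottom) - max(a.y, b.y)
            return dx, dy

        def resolve_collisions(player, tiles):
            while True:
                hits = [(t, penetration(player, t)) for t in tiles]
                hits = [(t, d) for t, d in hits if d[0] > 0 and d[1] > 0]
                if not hits:
                    return
                # Shallowest penetration first: tiles with unequal axis depths
                # are resolved before the ambiguous equal-depth corner tiles.
                tile, (dx, dy) = min(hits, key=lambda h: min(h[1]))
                if dx < dy:
                    player.x += -dx if player.x + player.w / 2 < tile.x + tile.w / 2 else dx
                else:
                    player.y += -dy if player.y + player.h / 2 < tile.y + tile.h / 2 else dy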

    Read the article

  • Many small scripts, one repository or multiple?

    - by The Jug
    A co-worker and I have run into an issue that we have multiple opinions on. Currently we have a git repository in which we keep all of our cron jobs. There are about 20 crons, and they are not really related except for the fact that they are all small Python scripts and essential for some activity. We are using a fabric.py file to deploy and a requirements.txt file to manage the requirements for all of the scripts. Our issue is basically: do we keep all of these scripts in one git repository, or should we separate them out into their own repositories? By keeping them in one repository, it is easier to deploy them onto one server, and we can use just one cron file for all the scripts. However, this feels wrong, as the 20 cron jobs are not logically related. Additionally, when using one requirements.txt file for all the scripts, it's hard to figure out what the dependencies are for a particular script, and they all have to use the same versions of packages. We could separate all of the scripts out into their own repositories, but this creates 20 different repositories that need to be remembered and dealt with. Most of these scripts are not very large, and that solution seems like overkill. A related question: do we use one big crontab file for all cron jobs, or a separate file for each? If each has its own, how does one crontab's installation avoid overwriting the other 19? This also seems like a pain, as there would then be 20 different cron files to keep track of. In short, our main question is: do we keep them all closely bundled as one repository, or do we separate them out into their own repositories with their own requirements.txt and fabfile.py? We feel like we're probably overlooking some really simple solution. Is there an easier way to deal with this issue? (One possible middle ground is sketched below.)
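
    One middle-ground layout (our suggestion, not from the question) keeps the single repository but gives each job its own subdirectory with its own requirements.txt and its own cron fragment; installing each fragment as a separate file under /etc/cron.d means no job's installation ever overwrites another's. The job names here are hypothetical:

        cron-scripts/
            fabfile.py                  # deploys every job below
            jobs/
                cleanup_logs/
                    cleanup_logs.py
                    requirements.txt    # dependencies for this job only
                    cron.d              # installed as /etc/cron.d/cleanup-logs
                sync_reports/
                    sync_reports.py
                    requirements.txt
                    cron.d              # installed as /etc/cron.d/sync-reports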

    Read the article

  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes. An example of such a scene can be seen below. I'm wondering what acceleration data structure would be most efficient in this case, since all objects are touching, but on the other hand the scene is uniform. The objects in my ray tracer are stored as collections of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box? A few other questions I have are about photon mapping. I've read the original paper by Jensen and a good deal of other material. In the compact data structure for the photon they introduce, photon power is stored as 4 chars, which from my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux of the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs. power). My other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per triangle of an object) instead of using a balanced kd-tree. Also, the same question about the bounding box of the scene, but for photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?
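
    On the 4-char representation: as we read Jensen, the four bytes are not 3 for color plus 1 for flux but Ward's shared-exponent RGBE encoding, three mantissa bytes plus one common exponent byte, which is how fluxes on the order of 1/n stay representable. A Python sketch of the idea (rounding details vary between implementations):

        import math

        def rgbe_encode(r, g, b):
            # Pack an HDR RGB triple into 4 bytes: 3 mantissas + shared exponent.
            m = max(r, g, b)
            if m < 1e-32:
                return (0, 0, 0, 0)
            mantissa, exponent = math.frexp(m)      # m == mantissa * 2**exponent
            scale = mantissa * 256.0 / m
            return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

        def rgbe_decode(r8, g8, b8, e8):
            if e8 == 0:
                return (0.0, 0.0, 0.0)
            scale = math.ldexp(1.0, e8 - (128 + 8)) # undo the bias and the 256 factor
            return ((r8 + 0.5) * scale, (g8 + 0.5) * scale, (b8 + 0.5) * scale)

        # A per-channel flux of 1e-5 round-trips with well under 1% error:
        print(rgbe_decode(*rgbe_encode(1e-5, 1e-5, 1e-5)))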

    Read the article

  • Infrastructure to effectively set up experiments and learn from them

    - by David
    Open-org.com is in the early stages of creating our first product, a place on the web where one can ask lawyers questions at a fraction of their normal cost. An early-stage front page can be found here. I was inspired by this video, which is recommended by Jeff Atwood and talks about getting feedback faster, which is the reason for this question. The problem: needless to say, we want our conversion rates to be as high as possible. Therefore, we want to be able to rapidly set up a new experiment in which we change something on the site (like moving an image slightly, rewriting a sentence, etc.). We then want to present the modified page to a random subset of the users. After that, we will compare the conversion rate of the experiment against that of another version. I could very well imagine that we will want to run 10-100 experiments simultaneously, and it would be nice to have a way to end experiments with obviously worse results ahead of schedule. My question: does infrastructure to support the whole process exist? A short description of our infrastructure: we use EC2 and PHP and have a script to automatically start up new instances with all needed software. Still, starting up a new server for every experiment seems like overkill, so I am wondering what other options exist. By the way, if you feel like working for Open-org.com, you can pick a task and start working, or suggest a new task. All profits are given out to the contributors.
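
    Hosted split-testing tools cover most of this, but the core assignment step is small enough to sketch; here it is in Python (the site itself is PHP, and the function and experiment names are ours), hashing a visitor ID so the same visitor always sees the same variant without any extra stored state:

        import hashlib

        def variant(visitor_id, experiment, weights):
            # Deterministic bucket in [0, 1) from the visitor/experiment pair.
            digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
            bucket = int(digest[:8], 16) / 0x100000000
            cumulative = 0.0
            for name, weight in weights:
                cumulative += weight
                if bucket < cumulative:
                    return name
            return weights[-1][0]

        # 90% of visitors see the control page, 10% the experimental rewrite.
        print(variant("session-1234", "front-page-headline",
                      [("control", 0.9), ("rewrite", 0.1)]))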

    Read the article

  • How to motivate visitors to comment

    - by Michal
    First, I must apologize, because I am not sure this question is valid for the Webmasters topic. I face the problem as a webmaster; however, I think this question is more related to marketing. Nevertheless, I searched for a marketing Stack Exchange on Meta Stack Overflow and did not find such a site. Background: four days ago, I launched a portal with a database of barber salons, on which people can find a salon through various criteria, see its photos and details, and also post a comment with their own opinion. The development took me half a year, and it took another 2 months to fill the database with information about barbers (I also hired another three people for this job). I don't have a big problem getting people to my portal; I pay for PPC, comment on barber discussions, etc. In the past four days I've reached a satisfactory number of visitors. Problem: everyone wants to search and read comments, but no one is willing to post their own opinion. So two days ago I tried the following:

      - Made comments anonymous, so no one has to be afraid of compromising her/his identity with a salon owner
      - Prepared a competition in which users can win a cosmetics package if they comment on at least three different salons
      - Paid for a PPC campaign on Facebook telling people about the competition
      - Registered the competition on 20 competition portals

    And the result:

      - People are commenting on Facebook that the competition is a good idea
      - They are giving likes on Facebook
      - But no one has posted a single comment on a barber salon

    I am getting a little confused about what I am doing wrong. I will be thankful for any advice.

    Read the article

  • Friday Tips #5

    - by Chris Kawalek
    Happy Friday, everyone! Following up on yesterday's post about Oracle VM VirtualBox being selected as the best virtualization solution for 2012 by the readers of Linux Journal, our Friday tip is about that very cool piece of software. Question: How do I move a VM from one machine to another with Oracle VM VirtualBox? Answer by Andy Hall, Product Management Director, Oracle Desktop Virtualization: There are a number of ways to do this, with pros and cons for each. The most reliable approach is to export and import virtual machines: from the VirtualBox Manager, simply use the File > Export Appliance menu and follow the wizard's lead; move the resulting file(s) to the destination machine; and import the VM into VirtualBox there. This method takes longer and uses more disk space than other methods, because the configuration files and virtual hard drives are converted into an industry-standard format (.ova or .ovf). But an advantage of this approach is that the creator of the virtual appliance can add a license, which the importer will see and click to accept at import time. This is especially useful for ISVs looking to deliver pre-built, configured and tested appliances to their customers and prospects. Thanks, Andy! Remember, if you have a question for us, use the Twitter hashtag #AskOracleVirtualization. We'll see you next week! -Chris
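
    The same move can be scripted with VirtualBox's VBoxManage command-line tool; a sketch driving it from Python (the VM name and file paths are illustrative, and VBoxManage must be on the PATH):

        import subprocess

        # On the source machine: export the VM to an industry-standard .ova file.
        subprocess.run(["VBoxManage", "export", "MyVM", "--output", "MyVM.ova"],
                       check=True)

        # ...copy MyVM.ova to the destination machine, then import it there:
        subprocess.run(["VBoxManage", "import", "MyVM.ova"], check=True)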

    Read the article

  • What are the advantages of programming under an OS as opposed to a bare metal executive?

    - by gby
    Assume you are presented with an embedded system application to program, in C, on a multi-core environment (think a Cavium or Tilera), and you need to choose between two environments: code the application under Linux in SMP mode, or code the application under a thin bare metal executive (something like a very minimal RTOS), perhaps with a single core running UP Linux that can serve control tasks. For the purpose of this question, assume that both environments provide the same level of performance guarantees in any measurable aspect of runtime performance, including the number of meaningful actions per second, jitter, latency, real-time considerations - the works. (And yes, I realize this is by far not a trivial assumption at all; bear with me.) How would you justify going with a Linux SMP based solution rather than a bare metal thin executive solution? The question may seem silly. It certainly seems obvious to me, but I have to convince someone who does not think the same. Could you help make a list of arguments in favor of choosing a real SMP-aware OS (Linux) vs. a bare metal executive, assuming performance guarantees are NOT an issue? Many thanks.

    Read the article

  • Should I start MCPD training now or wait for new exams?

    - by lunchmeat317
    I apologize if this question has been asked before, or if this is the wrong place to put it. I'm beginning my study track for the MCPD certification in Web Development. However, Microsoft plans to retire this certification on July 31st, 2013, along with two of the tests necessary to receive it. On MS's site, I can't find a newer certification path to take. I imagine that Microsoft will release new certification paths and new tests for their new software, but I don't know when that will happen. I don't really know anything about Microsoft's process, as this is the first Microsoft certification I'll be studying for. The bottom line is this: I don't want to lose six months waiting for a new test to appear that won't expire, but I don't want to rush to get a certification that will be invalid in six months (or have to reset any progress due to new study material). To those with experience in affairs like this: what is the best course to take, and how can I make the most of the time I have now (rather than waiting for new testing material)? Is there any way to find material for the new tests that Microsoft will be rolling out? Thank you for your patience. If this is the wrong place to put this question, I would like to request that it be moved to the correct Stack Exchange site instead of being closed. Thanks for your help!

    Read the article

  • My Visual Studio Demo Video Link disappeared – How do I get it back?

    - by Tarun Arora
    ***Special thanks to Adam Cogan for asking this question and to Andrew Bragdon for answering it on the ALM Champs list.***

    1. Problem: the link to a demo video disappears once you have watched the video

    Learning Visual Studio has become easier than ever, with the Visual Studio how-to videos hosted inside Visual Studio showing up in the context of the task you are trying to achieve. For instance, when you click Code Review in Team Explorer, you see the link "Streaming Video: Using Code Review to improve quality"; when you click this link, the video is streamed to you right within Visual Studio. The next time you run Visual Studio, you will notice that the home page shows a check mark against the video "Using Code Review to improve quality". If you navigate to Code Review in the myWork hub in Team Explorer, you will notice that the link "Streaming Video: Using Code Review to improve quality" no longer shows up.

    2. Solution: how to get the demo video link back

    Warning: editing the registry can lead to serious problems if not done correctly. Always back up your registry before editing. This solution is neither suggested nor supported by Microsoft.

      - Type regedit at the Run prompt to open the Registry Editor.
      - Navigate to Computer\HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\11.0\UltimateStartPage\VideoState and notice the newly created folder "TeamExplorer.CodeReview"; notice that the key Watched is set to 1.
      - Change the value of the key 'Watched' to 0.
      - Restart Visual Studio and navigate to Code Review in the myWork hub, and voila, the link to stream the video is back!

    Watch and enjoy the demo videos to your heart's content!
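
    The same tweak can be scripted; a minimal sketch using Python's winreg module (we are assuming the Watched value is a DWORD; check the type shown in regedit first, and the usual registry-editing caveats apply):

        import winreg

        PATH = (r"Software\Microsoft\VisualStudio\11.0"
                r"\UltimateStartPage\VideoState\TeamExplorer.CodeReview")

        with winreg.OpenKey(winreg.HKEY_CURRENT_USER, PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
            # Mark the video as unwatched so the streaming link reappears.
            winreg.SetValueEx(key, "Watched", 0, winreg.REG_DWORD, 0)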

    Read the article

  • How to break the "PHP is a bad language" paradigm? [closed]

    - by dukeofgaming
    PHP is not a bad language (or at least not as bad as some may suggest). I had teachers who didn't even know PHP was object-oriented until I told them. I've had clients who immediately distrust us when we say we are PHP developers, and question us for not using chic languages and frameworks such as Django or RoR, or "enterprise and solid" languages such as Java and ASP.NET. Facebook is built on PHP. There are plenty of solid projects powering the web, like Joomla and Drupal, that are used in the enterprise and in governments. There are frameworks and libraries with some of the best architectures I've seen across all languages (Symfony 2, Doctrine). PHP has the best documentation I've seen and a big community of professionals. PHP has advanced OO features such as reflection and interfaces, not to mention that PHP now supports horizontal reuse natively and cleanly through traits. There are bad programmers and script kiddies who give PHP a bad reputation but power the PHP community at the same time; and because it is so easy to get stuff done in PHP, you can often do things the wrong way, granted, but why blame the language? Now, to boil this down to an actual answerable question: what would be a good, solid, short and sweet argument to avoid being frowned upon, stop prejudice in one fell swoop, and defend your honor when you say you are a PHP developer? (Free cookie with the whipped cream for those with empirical evidence of convincing someone, client or other, on the spot.) P.S.: We use Symfony, and the code ends up being beautiful and maintainable.

    Read the article

  • What is the best way to render a 2d game map?

    - by Deukalion
    I know efficiency is key in game programming, and I've had some experience with rendering a "map" before, but probably not in the best of ways. For a 2D top-down game (simply render the textures/tiles of the world, nothing else): say you have a map of 1000x1000 tiles (or whatever). If a tile isn't in the view of the camera, it shouldn't be rendered; it's that simple. No need to render a tile that won't be seen. But since you have 1000x1000 objects in your map, or perhaps fewer, you probably don't want to loop through all 1000*1000 tiles just to see whether they're supposed to be rendered or not. Question: what is the best way to implement this efficiently, so that it can quickly determine which tiles are supposed to be rendered? Also, I'm not building my game around tiles rendered with a SpriteBatch, so there are no rectangles; the shapes can be different sizes and have multiple points, say a curved object of 10 points with a texture inside that shape. Question: how do you determine whether this kind of object is "inside" the view of the camera? It's easy with a 48x48 rectangle: just see whether its X+Width or Y+Height is in the view of the camera. It's different with multiple points. Simply put: how do you manage the code and the data efficiently so as not to loop through a million objects at the same time?
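
    For the tile case, the usual trick is to index tiles by grid position and derive the visible index range directly from the camera, so only on-screen tiles are ever visited; for free-form shapes, precompute each shape's bounding box and test that against the camera rectangle. A sketch in Python (the tile size and field layout are illustrative):

        TILE = 48

        def visible_tiles(cam_x, cam_y, cam_w, cam_h, map_w, map_h):
            # Convert the camera rectangle into a tile index range:
            # cost is proportional to the visible area, not the map size.
            first_col = max(0, int(cam_x) // TILE)
            first_row = max(0, int(cam_y) // TILE)
            last_col = min(map_w - 1, int(cam_x + cam_w) // TILE)
            last_row = min(map_h - 1, int(cam_y + cam_h) // TILE)
            for row in range(first_row, last_row + 1):
                for col in range(first_col, last_col + 1):
                    yield col, row

        def shape_in_view(points, cam_x, cam_y, cam_w, cam_h):
            # Conservative test: the shape's axis-aligned bounding box
            # (computed once and cached in a real game) vs. the camera.
            xs = [x for x, _ in points]
            ys = [y for _, y in points]
            return (min(xs) < cam_x + cam_w and max(xs) > cam_x and
                    min(ys) < cam_y + cam_h and max(ys) > cam_y)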

    Read the article

  • Missing Package: header, Problem with MergeList, The package lists or status file could not be parsed or opened

    - by Inbar Rose
    THIS IS NOT A DUPLICATE OF SIMILAR QUESTIONS (like this). I just had to write that first; there are tons of questions similar to this, and all of them point to an answer that does not solve my problem, because I don't have the same problem, just the same symptom. I write tests for my company's application. One of these tests tries to upgrade the application from a previous version to a new version to make sure nothing breaks. When I install an old version of the application, some weird stuff starts to happen. Sometimes everything goes okay and nothing is wrong; other times, when trying to install, I get this message (company app name censored):

        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/XXX-amd64_Packages
        E: The package lists or status file could not be parsed or opened.

    The solutions provided in questions similar to this one (like this) do not help, and the problem keeps repeating itself once it happens the first time. This has led me to believe something is wrong on the apt server where the package is being created, but searching for these errors yields no information beyond the "fix" suggested in the question I linked; the only other source of information I could find also did not help (here). So I am asking for information: What is the actual problem? What causes the problem? What can fix the problem? I hope this question is in good format; if there is a problem or missing information, I can move to chat.

    Read the article

  • Are the Ubuntu ISO images on releases.ubuntu.com updated?

    - by tijybba
    I just got this idea from another (possibly unrelated) question. Are the official ISO images updated with updates to the core Ubuntu system, like kernel updates, desktop environment (Unity) updates, and updates to the base system including X.org, the office suite, the package manager, the update manager, and the GNOME base modules, i.e. those released in update branches like precise-updates? The reason I am asking is that if I download the ISO image of Ubuntu 12.04, say, two or three months after release, I have to download approximately 200-300 MB of updates. So why are these ISO images not refreshed with recent updates? I am aware that not all components are updated at the same time, but let's say one month after the actual release (both LTS and normal releases), the updated components could be added to form an updated ISO at regular intervals, giving new users the latest versions and features with improved stability and less bandwidth consumption. I am not talking about a rolling release, updates added via external PPAs, or the netinstall, but about an ISO of updated packages; this could be provided as an optional download. Since my question stays within the boundary of official update releases, stability should not be the reason. I guess there are custom packagers out there, but having an official option would be better. It would help distribute the newest ISO, which impresses a lot of new users, since it makes the latest versions and features available and of course a faster system. Another reason for asking is here. Edit: Almost all new (desktop) users download the default ISOs, which can have an issue or two that have already been corrected in subsequent updates. Most of the new laptop users I encountered gave up because of this, so should I suggest, for laptops not listed on the certified hardware list, trying the daily builds if needed?

    Read the article

  • Is there any research out there on geographic differences in work environments (e.g., respect) for programmers?

    - by Ethel Evans
    One thing I've learned from this website is that software developers are not treated everywhere the same as in the companies I've worked at, and some of the differences seem to be related to the culture or other factors of the geographical location where the programmer works. In some areas, it seems like programmers can expect many perks and a great deal of professional respect, but in others it sounds like programmers are seen as laborers who are told what to do and should then go do it without question. Even within the USA, there seem to be major differences in "the norm" between the various regions of the country. I'm wondering how much of this is just my perception and how much reflects real differences in how programmers are perceived in different locations. Is there any research out there discussing major differences in programmer work environments, or attitudes about how to treat or respect programmers, by geography? I'd be interested in multiple articles tackling different ways of looking at this. Edit: Research, specifically, doesn't seem to be available, so I'm making the question broader: is any good, thoughtful writing on the topic available?

    Read the article

  • Why don't I have a loop error with these redirects?

    - by byronyasgur
    I know this may seem like a question in reverse, but I actually don't seem to have a problem; I just want to make sure before I proceed. I have 2 domains, domain1.com and domain2.com, and a directory my_directory at domain1.com. I have domain2.com set up as an "addon domain" in the cPanel account of domain1.com, so that when I go to domain2.com I am taken to domain1.com/my_directory, but the browser shows domain2.com in the address bar, so it looks and acts like a separate site. However, when people browse to domain1.com/my_directory, I want the address bar to show domain2.com, not domain1.com/my_directory. So I put a redirect in the htaccess file redirecting domain1.com/my_directory to domain2.com, and it works perfectly; but I think it shouldn't, and I'm worried I've done something wrong. My question is this: domain2.com was already redirected to domain1.com/my_directory in the first place (I see the redirect in my cPanel under addon domains), so by adding the second redirect in the htaccess file I should be creating a loop! Could somebody please set my mind at rest and show me why not?

    Read the article

  • DualLayout OpenSourceFood demo site installation instructions

    - by svdoever
    We released DualLayout, which enables advanced web design with the power of SharePoint. DualLayout and a demo site can be downloaded from the DualLayout product page. This blog post contains detailed instructions for installing the demo site. The demo site is based on the site http://opensourcefood.com. It requires internet access because it still links to pages and resources of the real site. Execute the following steps to install the demo site:

      - Copy the OpenSourceFoodDemo.zip file to your SharePoint Server 2010.
      - Make sure that the zip file is "unblocked", otherwise files are assumed to come from another computer (right-click on the zip file and press the "Unblock" button if available).
      - Unzip OpenSourceFoodDemo.zip to a folder of your choice (e.g. c:\OpenSourceFoodDemo).
      - Open the SharePoint 2010 Management Shell (Start -> Microsoft SharePoint 2010 Products -> SharePoint 2010 Management Shell).
      - Change directory to the unzip folder (cd c:\OpenSourceFoodDemo).
      - Start the install script: .\InstallDemoSite.ps1
      - Answer the questions; the default values are OK in most cases. A little guidance:
        Question: Give credentials for the account that will be used for the application pool.
        Answer: Use, for example, the same account as used for the application pool of your SharePoint site (look it up in IIS Manager).
        Question: Give credentials for the account that will be used for the application pool.
        Answer: Use the same account you are currently logged in with.

    The demo site is made available through a backup and restore, so the SharePoint Server 2010 installation must be patched to an update level equal to or higher than that of the SharePoint server used to create the backup. If you get errors with respect to the restore, check http://technet.microsoft.com/en-us/sharepoint/ff800847.aspx for the latest cumulative update.

    Read the article
