Search Results


  • My PC becomes very slow after changing page file in XP.

    - by Jane
    Hello there, I have 5 partitions on my computer (C, D, E, F, G), and C: is the system partition. Recently I changed the page file settings to use a 768-768 MB page file on G:, without changing the C: default (256-512 MB). Everything ran fine until I rebooted my PC; I had removed the page file from G: before rebooting. After the reboot, XP became VERY slow. I don't know how long I waited for the boot screen to disappear (maybe 1-3 minutes, when normally it is under 10 seconds), and once I reached the desktop it felt like using a computer without a video card or video driver installed. Very slow. Any suggestions for fixing this? I'd hate to have to do a repair install of XP. Thanks.


  • Which is the best image hosting site for hosting images for my website? [closed]

    - by rahul dagli
    Possible Duplicate: How to find web hosting that meets my requirements? I currently have a website and a blog on a limited web hosting plan. When I upload images to my hosting server they consume a lot of bandwidth and space, so I was thinking of hosting the images on some other image hosting site and direct-linking them from my site. I found a few sites like ImageShack, Photobucket, TinyPic and Imgur, but I see they all have certain restrictions. The features I am looking for are as follows:
    1. At least 10 GB of space
    2. At least 500 GB of bandwidth (because I have very high traffic)
    3. Very high speed even during heavy load, e.g. 1000 visitors accessing every hour
    4. Ultra-reliable servers (99.9% uptime)
    5. Privacy controls
    6. Images must never be deleted, even if inactive
    7. Ability to create and manage albums
    8. A company that will stay in business for at least the next 10 years
    9. Free of cost
    10. Hotlinking/direct linking of images


  • What triggered the popularity of lambda functions in modern mainstream programming languages?

    - by Giorgio
    In the last few years anonymous functions (AKA lambda functions) have become a very popular language construct, and almost every major/mainstream programming language has introduced them or is planning to introduce them in an upcoming revision of its standard. Yet anonymous functions are a very old and very well-known concept in mathematics and computer science (invented by the mathematician Alonzo Church around 1936, and used in the Lisp programming language since 1958, see e.g. here). So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning, only introducing them later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement or programming technique that started this phenomenon? IMPORTANT NOTE: The focus of this question is the introduction of anonymous functions in modern, mainstream (and therefore, maybe with a few exceptions, non-functional) languages. Also note that anonymous functions (blocks) are present in Smalltalk, which is not a functional language, and that normal named functions have been present even in procedural languages like C and Pascal for a long time. Please do not overgeneralize your answers by speaking about "the adoption of the functional paradigm and its benefits", because that is not the topic of this question.
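
    To make concrete which construct the question is about, here is a minimal illustrative C++11 example of an anonymous (lambda) function, written for this summary rather than taken from the original post:

        #include <algorithm>
        #include <iostream>
        #include <vector>

        int main() {
            std::vector<int> values = {5, 2, 8, 1};

            // An anonymous function passed straight to an algorithm, with no
            // separately defined and named comparison function.
            std::sort(values.begin(), values.end(),
                      [](int a, int b) { return a > b; });

            for (int v : values) std::cout << v << ' ';
            std::cout << '\n';  // prints: 8 5 2 1
        }

    The same idea appears, with different syntax, in C#, Java 8, JavaScript, Python and most other mainstream languages the question alludes to.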


  • C++ Iterator lifetime and detecting invalidation

    - by DK.
    Based on what's considered idiomatic in C++11:
    1. Should an iterator into a custom container survive the container itself being destroyed?
    2. Should it be possible to detect when an iterator becomes invalidated?
    3. Are the above conditional on "debug builds" in practice?
    Details: I've recently been brushing up on my C++ and learning my way around C++11. As part of that, I've been writing an idiomatic wrapper around the uriparser library. Part of this is wrapping the linked list representation of parsed path components. I'm looking for advice on what's idiomatic for containers. One thing that worries me, coming most recently from garbage-collected languages, is ensuring that random objects don't just go disappearing on users if they make a mistake regarding lifetimes. To account for this, both the PathList container and its iterators keep a shared_ptr to the actual internal state object. This ensures that as long as anything pointing into that data exists, so does the data. However, looking at the STL (and lots of searching), it doesn't look like C++ containers guarantee this. I have the horrible suspicion that the expectation is to just let containers be destroyed, invalidating any iterators along with them. std::vector certainly seems to let iterators get invalidated and still (incorrectly) function. What I want to know is: what is expected from "good"/idiomatic C++11 code? Given the shiny new smart pointers, it seems kind of strange that the STL lets you easily blow your legs off by accidentally leaking an iterator. Is using a shared_ptr to the backing data an unnecessary inefficiency, a good idea for debugging, or something expected that the STL just doesn't do? (I'm hoping that grounding this in "idiomatic C++11" avoids charges of subjectivity...)
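
    A minimal sketch of the shared_ptr-backed design the question describes, so the trade-off is concrete; the names mirror the question's PathList, but the implementation is invented for illustration and is not the actual wrapper:

        #include <cstddef>
        #include <memory>
        #include <string>
        #include <vector>

        // Hypothetical internal state shared by the container and its iterators.
        struct PathState {
            std::vector<std::string> segments;
        };

        class PathList {
            std::shared_ptr<PathState> state_;
        public:
            explicit PathList(std::vector<std::string> segments)
                : state_(std::make_shared<PathState>(PathState{std::move(segments)})) {}

            class iterator {
                std::shared_ptr<PathState> state_;  // keeps the backing data alive
                std::size_t index_;
            public:
                iterator(std::shared_ptr<PathState> s, std::size_t i)
                    : state_(std::move(s)), index_(i) {}
                const std::string& operator*() const { return state_->segments[index_]; }
                iterator& operator++() { ++index_; return *this; }
                bool operator!=(const iterator& other) const { return index_ != other.index_; }
            };

            // Iterators share ownership, so they remain valid even if the
            // PathList object itself is destroyed.
            iterator begin() const { return iterator(state_, 0); }
            iterator end() const { return iterator(state_, state_->segments.size()); }
        };

    Standard containers do not do this: their iterators are non-owning, and using one after the container is destroyed or modified is undefined behavior rather than something detectable.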


  • Please help me understand libraries/includes

    - by fiftyeight
    I'm trying to understand how libraries work. For example, I downloaded a tarball and extracted it. Now I run "./configure", and from what I understand it searches pre-defined directories for certain library files. What does it do then? It creates a makefile, and the makefile contains the paths to these libraries? Then I run "make", which compiles the source code and hard-codes the locations of the libraries? Am I correct? I don't really understand whether libraries are files at pre-defined paths, or whether the OS somehow provides access to them through system calls. Another example: I compiled something on my computer and then moved it to a remote server. The executable needs the MySQL libraries to work, and the server has MySQL, but for some reason when I execute the file it tells me "can't find libmysqlclient.so.16". Is there a solution for this? Is there a way to see where it tries to locate this file, or to give it another path? I can't compile it on the server since it has no compiler, and I don't have root access to install packages. My last question: in the sequence "./configure", "make", "make install", is "make install" the only command that actually puts files outside the directory in which the sources reside? If, for example, the software will be installed in /usr/local/, is "make install" the only command that will require "sudo"? Let me see if I got it correctly: "./configure" creates the Makefile according to the location of various files on your system, "make" compiles the source code according to that Makefile, and "make install" moves the files to their appropriate locations. I know this has been very long; I thank anyone who had the patience to read my question :)


  • 503 service unavailable when debugging PHP script in Zend Studio

    - by user25932
    I have a web server with Apache 2.0 installed; it came with the Zend Server install pack. When I try to debug my PHP files, Apache serves a blank page with 503 Service Unavailable. Of course slow server-side code ties up Apache requests for far too long, but I need Apache to wait until my debugging session ends. When I request the page from a browser, it launches Zend Studio to debug my PHP script (the request is redirected by the Zend Debugger module). I step through my script, and if I finish debugging within 120 seconds I return to the browser normally. When it takes more than 120 seconds, the browser displays '503 Service Unavailable' and I can't get back to the page output. I have even forced 'max_execution_time = 300' and 'max_input_time = 600' in php.ini and 'TimeOut = 500' in httpd.conf. It makes no difference whether I use Opera, IE or Firefox. I have spent two days googling this and have found no correct answer so far.


  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually, I consider a large number of parameters as an alarm bell that there may be a design problem somewhere. I am using a generic repository for an ASP.NET application and have a controller with a growing number of parameters.

        public class GenericRepository<T> : IRepository<T> where T : class
        {
            protected DbContext Context { get; set; }
            protected DbSet<T> DbSet { get; set; }

            public GenericRepository(DbContext context)
            {
                Context = context;
                DbSet = context.Set<T>();
            }

            // ...methods excluded to keep the question readable
        }

    I am using a DI container to pass the DbContext into the generic repository. So far this has met my needs, and there are no other concrete implementations of IRepository<T>. However, I had to create a dashboard which uses data from many entities, plus a form containing a couple of dropdown lists. Using the generic repository, this makes the parameter requirements grow quickly. The controller ends up being something like:

        public HomeController(
            IRepository<EntityOne> entityOneRepository,
            IRepository<EntityTwo> entityTwoRepository,
            IRepository<EntityThree> entityThreeRepository,
            IRepository<EntityFour> entityFourRepository,
            ILogError logError,
            ICurrentUser currentUser)
        {
        }

    It has about six IRepository parameters, plus a few others, to pull in the required data and the dropdown list options. In my mind this is too many parameters. From a performance point of view there is only one DbContext per request, and the DI container will serve the same DbContext to all of the repositories. From a code standards/readability point of view it's ugly. Is there a better way to handle this situation? It's a real-world project with real-world time constraints, so I won't dwell on it too long, but from a learning perspective it would be good to see how others handle such situations.
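
    One commonly suggested way out of this is to group the repositories the dashboard needs behind a single aggregate (facade) dependency, so the controller takes one parameter instead of six. The question itself is C#/ASP.NET, but the idea is language-agnostic; here is a rough sketch in C++ with entirely hypothetical names:

        #include <memory>

        // Hypothetical per-entity repository interface.
        template <typename T>
        struct IRepository {
            virtual ~IRepository() = default;
            // query/add/remove methods omitted
        };

        struct EntityOne {};
        struct EntityTwo {};
        struct EntityThree {};

        // Facade that bundles everything the dashboard needs.
        struct DashboardData {
            std::shared_ptr<IRepository<EntityOne>>   ones;
            std::shared_ptr<IRepository<EntityTwo>>   twos;
            std::shared_ptr<IRepository<EntityThree>> threes;
        };

        // The controller now depends on one object instead of many repositories.
        class HomeController {
            std::shared_ptr<DashboardData> data_;
        public:
            explicit HomeController(std::shared_ptr<DashboardData> data)
                : data_(std::move(data)) {}
        };

    Whether that is actually better than six explicit constructor parameters is exactly the trade-off the question asks about.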


  • Information Spilling Across Object Boundaries

    - by Winston Ewert
    Many times my business objects have situations where information needs to cross object boundaries too often. When doing OO, we want information to live in one object, and as much as possible all code dealing with that information should be in that object. However, business rules do not follow this principle, which gives me trouble. As an example, suppose we have an Order which has a number of OrderItems, each of which refers to an InventoryItem which has a price. I invoke Order.GetTotal(), which sums the results of OrderItem.GetPrice(), which multiplies a quantity by InventoryItem.GetPrice(). So far so good. But then we find out that some items are sold with a two-for-one deal. We can handle this by having OrderItem.GetPrice() call something like InventoryItem.GetPrice( quantity ) and letting InventoryItem deal with it. However, then we find out that the two-for-one deal only lasts for a particular time period, and that period depends on the date of the order. Now we change OrderItem.GetPrice() to call InventoryItem.GetPrice( quantity, order.GetDate() ). But then we need to support different prices depending on how long the customer has been in the system: InventoryItem.GetPrice( quantity, order.GetDate(), order.GetCustomer() ). And then it turns out that the two-for-one deals apply not just to buying multiples of the same inventory item, but to multiples of any item in an InventoryCategory. At this point we throw up our hands and just give InventoryItem the order item, letting it travel the object reference graph via accessors to get whatever information it needs: InventoryItem.GetPrice( this ). TL;DR: I want to keep information and the code that uses it together in one object, but business rules keep forcing me to gather information from all over the place in order to make particular decisions. Are there good techniques for dealing with this? Do others run into the same problem?
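
    To make the described call chain concrete, here is a minimal C++ sketch of the shape the code ends up in once the order date and customer have leaked into the price calculation; the types are hypothetical stand-ins for the question's example, not code from the original post:

        #include <iostream>
        #include <vector>

        struct Customer { int yearsInSystem; };
        struct Date { int year, month, day; };

        struct InventoryItem {
            double basePrice;
            // Every new business rule has widened this signature a little more.
            double GetPrice(int quantity, const Date& orderDate,
                            const Customer& customer) const {
                (void)orderDate; (void)customer;  // deal windows, loyalty pricing, ...
                return basePrice * quantity;
            }
        };

        struct OrderItem {
            const InventoryItem* item;
            int quantity;
        };

        struct Order {
            Date date;
            Customer customer;
            std::vector<OrderItem> items;

            double GetTotal() const {
                double total = 0.0;
                for (const OrderItem& oi : items)
                    total += oi.item->GetPrice(oi.quantity, date, customer);
                return total;
            }
        };

        int main() {
            InventoryItem widget{4.0};
            Order order{Date{2012, 6, 1}, Customer{3}, {OrderItem{&widget, 2}}};
            std::cout << order.GetTotal() << '\n';  // prints 8
        }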


  • Oracle OpenWorld Preview: JavaOne Social Developer Program

    - by kellsey.ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. If you're heading to San Francisco later this month for JavaOne and are interested in learning about building social applications for your enterprise, you should plan to check out the Social Developer Program, organized and hosted by Roland Smart (http://twitter.com/rsmartx), who recently joined Oracle after the Involver acquisition. The program runs from 10 AM to 3:30 PM on Tuesday, October 2 at the San Francisco Hilton and features speakers from Oracle, Bit.ly, Facebook, LinkedIn, and Sociable Labs. The focus is on the emergence of social within the enterprise, and the day ends with a hackathon. That last bit got your attention? Thought it might. Here's the skinny: In this session the staff of the Oracle Social Developer Lab will present some social development tools that make integrating social functionality into your apps easier to achieve. This session kicks off a week-long hack to build an application using OSDL code. A winner will be selected and profiled in Java Magazine. I don't have any more details on the prize, which is sure to be epic, so you'll just have to attend the program. In the meantime, check out their Facebook page for more information. See you in San Francisco.


  • MySQL on Windows - how do I set the wait_timeout for connections using named pipes?

    - by gustafc
    I use a MySQL database running on a Windows box, and for performance reasons I'm connecting to it using named pipes. The (Java) application using the database (through Hibernate) can let the connection lie idle for quite a long time, which causes the connection to fail with the following message:

        com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet
        successfully received from the server was 33 558 297 milliseconds ago. The
        last packet sent successfully to the server was 33 558 297 milliseconds ago.
        is longer than the server configured value of 'wait_timeout'. You should
        consider either expiring and/or testing connection validity before use in
        your application, increasing the server configured values for client
        timeouts, or using the Connector/J connection property 'autoReconnect=true'
        to avoid this problem.

    autoReconnect unfortunately has no effect (and neither does autoReconnectForPools), but the wait_timeout docs state that wait_timeout only applies "to TCP/IP and Unix socket file connections, not to connections made via named pipes, or shared memory". How can I change the wait_timeout for named pipes?


  • Benchmarks relevant for a Visual Studio .Net development workstation

    - by user30715
    I am developing a system with Windows 7 64-bit, Visual Studio and SharePoint on a virtual workstation on some kind of VMware server. The system is painfully slow: VS lags behind when I enter code, IntelliSense lags, and opening and saving files takes ages compared to a normal budget laptop. As far as I can see the virtual machine has OK specs and does not seem to be swapping, and the IT department also says they can't see anything wrong when monitoring the system. As long as the problem is not well documented, the IT department and management do not want to throw money (that is, upgraded laptops) at us, so I need to show some sort of benchmark. It has been many years since I did any system benchmarking and I don't know the current benchmark software, so my question is: which benchmark is most relevant for Visual Studio performance? Not just compile speed, but also the general "responsiveness" of the system. Cheers, user30715


  • FreeBSD Ports: How can I see all dependencies for a port, and all subdependencies for those dependencies?

    - by Stefan Lasiewski
    I'm trying to build a port which depends on apache-ant. I thought I could run make build-depends-list to see all dependencies required by this port:

        # make build-depends-list
        /usr/ports/devel/apache-ant
        /usr/ports/java/jdk16
        /usr/ports/math/gmp

    But after installing everything, the port had a dependency list which was a mile long:

        apache-ant-1.8.1 desktop-file-utils-0.15_2 gamin-0.1.10_4 gettext-0.18.1.1
        gio-fam-backend-2.26.1 glib-2.26.1_1 gmp-5.0.1 inputproto-2.0
        javavmwrapper-2.3.5 kbproto-1.0.4 libX11-1.3.3_1,1 libXau-1.0.5
        libXdmcp-1.0.3 libXext-1.1.1,1 libXi-1.3,1 libXtst-1.1.0 libiconv-1.13.1_1
        libpthread-stubs-0.3_3 libxcb-1.7 pcre-8.12 perl-5.10.1_3 pkg-config-0.25_1
        python26-2.6.6 recordproto-1.14 unzip-6.0 xextproto-7.1.1 xproto

    How can I see all dependencies, and all sub-dependencies of those dependencies, for a port?


  • Can I become a Game Designer? [on hold]

    - by user32721
    This is my first time posting on a forum in 4 years. I am posting because I want to adjust my expectations and goals regarding game design. I am in college in Morocco (Al Akhawayn University) and just started my junior year. I am a communications major (school of humanities) with a gender studies minor. I want to become a video game designer; it is the only career I am interested in. I have been playing games ever since I was 5 and haven't stopped yet. Currently I don't have any noteworthy skills to become a designer: I don't know how to program (I don't really have the patience for it) and I can't draw to save my life. I haven't tried 3D software like Maya or Max, so I can't comment on graphic design. So I basically want to know whether my current education is capable of helping me reach my goal. If not, should I take a master's in game design (in the U.S.?) or switch my minor to computer science? I am sorry that this post is long! I look forward to hearing your advice!


  • What is the best way to track / record the current programming project you work on? [duplicate]

    - by user2424160
    This question already has an answer here:
    - Methodology for Documenting Existing Code Base (6 answers)
    - When do you start documenting the code? (13 answers)
    - Where should a programmer explain the extended logic behind the code? (5 answers)
    I have had this problem for a long time and I want to know how it's done in real, big-company projects. Suppose I have a project to build a website. I divide the project into sub-tasks and work through them. But suppose I have task1 in hand, like exporting a page to PDF. I spend 3 days on it, run into various problems and many Stack Overflow questions, and in the end I solve it. Four months later someone tells me there is an error in the code. By then I have completely forgotten about 60% of how I did it and why I did it that way. I document the code, but I can't write the whole story in the code, so I have to spend a lot of time in the code figuring out what the problem was and why I added a particular line, etc. I want to know whether there is any way to log the steps taken in completing a project, so that I can see how I ended up with the code, what errors I got, what questions I asked on SO, and so on. How do people do this in real projects? Which software should I use? I know that in our project management software, JIRA, we have tasks, but they do not cover the steps I took to solve them. What is the best way to make sure that when I look back at my 2-year-old project, I know how I solved a particular task?


  • How can I make an actual compact cassette "tape" mix-tape from iTunes?

    - by MikeN
    I want to make an actual compact cassette mix-tape as a gift for someone. I use iTunes to manage all of my music. So a few questions:
    1. If I gather a bunch of songs in a playlist for sides A/B of the tape, how can I ensure that the volume for all songs is the same as it plays on the tape?
    2. I was thinking of finding an old compact cassette recorder and connecting the stereo line output of my Mac to the recorder's microphone input. Is that a good way to record onto the actual tape?
    3. How long is each side of a compact cassette? Is there a default speed the tape plays at?
    4. Say I want to mesh some songs together so that they completely fill up one side of a tape (cutting 10 seconds off the end of one song or the beginning of another); what's the best way to do that?


  • How to install pecl uploadprogress on Debian Lenny

    - by kidrobot
    I am getting this output/error for # pecl install uploadprogress:

        downloading uploadprogress-1.0.1.tgz ...
        Starting to download uploadprogress-1.0.1.tgz (8,536 bytes)
        .....done: 8,536 bytes
        4 source files, building
        running: phpize
        Configuring for:
        PHP Api Version:         20041225
        Zend Module Api No:      20060613
        Zend Extension Api No:   220060519
        building in /var/tmp/pear-build-root/uploadprogress-1.0.1
        running: /tmp/pear/temp/uploadprogress/configure
        checking for grep that handles long lines and -e... /bin/grep
        checking for egrep... /bin/grep -E
        checking for a sed that does not truncate output... /bin/sed
        checking for gcc... no
        checking for cc... no
        checking for cl.exe... no
        configure: error: no acceptable C compiler found in $PATH
        See `config.log' for more details.
        ERROR: `/tmp/pear/temp/uploadprogress/configure' failed

    php-pear is installed. I'm stumped.


  • How to open the first n bytes of a file in hexadecimal and edit them?

    - by Larssend
    I want to edit some .avi videos (cut them, to be precise) in VirtualDub, but it fails to open the files. They are encoded in Xvid, which I have installed, and they play in KMPlayer without problems. Also, all other Xvid videos can be opened and cut just fine by VirtualDub, so I suspect there's something wrong in the first few bytes of these particular videos (the magic number?). This means I have to open the offending files in a hex editor and make the necessary adjustments to the header. The problem is, they are very large (around 3 GB each) and take a very long time to open in UltraEdit. Can UltraEdit open just the first few bytes of a file? If not, do you know of an application that can do that? Edit: I'm using Windows XP.
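
    For illustration, a minimal C++ sketch of inspecting just the first few bytes of a large file without loading the whole thing (an illustrative snippet written for this summary, not an endorsement of any particular tool):

        #include <cstdio>
        #include <fstream>
        #include <iostream>

        int main(int argc, char* argv[]) {
            if (argc < 2) { std::cerr << "usage: headdump <file>\n"; return 1; }

            std::ifstream in(argv[1], std::ios::binary);
            if (!in) { std::cerr << "cannot open " << argv[1] << '\n'; return 1; }

            unsigned char buf[16] = {};
            in.read(reinterpret_cast<char*>(buf), sizeof(buf));
            std::streamsize n = in.gcount();

            // A healthy AVI normally begins with the ASCII magic "RIFF", a 4-byte
            // size field, then "AVI ".
            for (std::streamsize i = 0; i < n; ++i)
                std::printf("%02X ", static_cast<unsigned>(buf[i]));
            std::printf("\n");
        }

    Reading the file this way only touches the first 16 bytes regardless of its size, which is the behaviour the question hopes to get from an editor.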


  • Configuration Tuning for PostgreSQL 9.1 PostGIS 1.5 Ubuntu 12.04 Server

    - by Martin
    My server performance is poor. At times SSH, top, and other features or commands are very slow to respond, taking several seconds or more. A query that normally takes 5 minutes can sometimes take 30 minutes. The database is mostly being used to do a spatial query (grid and summarize) on approximately 500 GB of stored data spread between 4 tables. Restarting the server works as a temporary fix, but cannot be used as a long-term solution. Any suggestions for how to diagnose and solve my performance issues?
    Hardware and configuration:
    - 3.3 GHz Intel quad-core i5
    - 16 GB DDR3 RAM
    - 6 TB software RAID 10 (6 x 2 TB drives)
    - Ubuntu 12.04 64-bit
    - Postgres 9.1
    - PostGIS 1.5


  • How to sync Google Chrome Web Browser and Google Bookmarks

    - by Cawas
    Google Chrome has its own bookmarks that are synced across browsers using Google Docs, and Google also has Google Bookmarks, which works very well with Google search itself. Why the two don't get along has been discussed at length in many places. I know similar questions have been asked, but I want to leave this one open until a good solution comes along, unlike this one. So, has the perfect way to sync Google Chrome bookmarks and Google Bookmarks been born yet? Next step: sync with Delicious!


  • Which keyboard has better ergonomics?

    - by Absolute0
    When I was a kid I fell hard on my right wrist, and since then I always get wrist pain when angling my wrist up very far (e.g. when using a mouse with a very high profile or doing push-ups). So I have narrowed my keyboard choices down to the following two: the Microsoft Natural 4000 and the Razer Arctosa. The Razer is a slim keyboard with a laptop-like feel, and its hand rest would help with keeping my hands straight with respect to my forearms. I am more inclined to get the Razer but am not sure whether it will benefit my wrists in the long run. Any thoughts on this would be greatly appreciated. Thanks.


  • Password manager with checking for expired logins and passwords

    - by ldigas
    Like most people here, I use a password manager for keeping all kinds of login/password information. Over time the heap of passwords has grown and grown, and now it's at about 350 entries (give or take). The problem is that most of these were temporary, for example logins for forums I wanted to visit once and never come back to; the same goes for various pages and so on. Because of that, every now and then I come across a login that has long since expired. So I was wondering: is there a utility out there that can check which of these have actually expired, by logging in and logging out? I know this is a relatively complicated operation (auto-filling doesn't always work, logging out is tricky, and so on), but still ... maybe someone knows of one.


  • CPU spikes on small site - possibly Apache or PHP config related

    - by Mike
    Hello, I hope you can help me. I have a site that I'm moving to a new datacenter. The server is pretty much vanilla, with no control panel and no optimizations. When I hit a page, the site takes an extremely long time to load, despite being relatively lightweight. I ran top to see what was happening: the CPU jumps to 75%, then drops back down to about 20% while the rest of the page is loading. Someone suggested that I run lsof -p on the offending processes, but I'm not sure what I'm looking at. I went through my httpd.conf file and commented out a bunch of loaded modules that didn't seem necessary, but that didn't help either. Anyone have any ideas? Output of lsof: http://pastebin.com/mfa113f


  • Proper way to re-image Windows 7?

    - by Alec
    I had a driver completely fail on me, so I have to restore my computer from a system image backup. I used an installation DVD to run the re-image utility from there, but after 8 hours of "preparing the image" to be restored, it began restoring to my hard drive, and after 12 more hours it was 5-8% complete. I figured I must have done something wrong, or it had started doing something wrong, so I installed a fresh copy of Windows 7 and ran the utility from there. It's going at the same snail's pace. I still think I must be doing something incorrectly: I don't see how it could possibly take so long; I could probably manually flip the bits on my hard drive and be done before that utility. Am I doing this correctly, or is there something else I should be doing?
    Edit: in case my hardware is relevant:
    - Windows 7 64-bit
    - Core i7
    - 8 GB RAM
    - 640 GB internal drive
    - 1 TB external drive connected via eSATA
    I had approximately 400 GB of data on my computer before it crashed.


  • Distinguishing the Transport Layer from the Session Layer

    In the OSI communications model, the Session layer manages the creation and removal of the association between two communicating network end points. This can also be called a connection. A connection is maintained while the network end points are communicating with each other in a conversation or session of some unknown length of time. Some connections and sessions only need to last long enough to send a message or a piece of data in one direction. Other sessions may last longer; this typically occurs when one or both of the network end points is able to terminate the connection. The Transport layer ensures that messages/data are delivered without errors, in sequence, and with no losses or duplications of data content. It relieves the Session layer of any concern with the transfer of data between it and its peers.
    Examples:
    Session layer - This layer acts like a manager and asks one of its employees (the Transport layer) to move a piece of data or a message from one network end point to another.
    Transport layer - This layer takes the request from the Session layer and moves the data as instructed; it also double-checks the data for any missing or corrupted information after it has been moved. If for some reason the new data does not match the old data, the Transport layer will attempt to move the data again until the data at both locations is in sync.
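
    As a toy illustration of the Transport layer role described above (not real networking code; every name here is made up for the example), the "move it again until both locations match" behaviour can be sketched like this:

        #include <cstdint>
        #include <cstdlib>
        #include <iostream>
        #include <string>

        // Toy stand-in for the transport layer's error detection: a simple checksum.
        uint32_t Checksum(const std::string& data) {
            uint32_t sum = 0;
            for (unsigned char c : data) sum = sum * 31 + c;
            return sum;
        }

        // Simulated unreliable delivery: occasionally corrupts the payload.
        std::string DeliverOnce(const std::string& data) {
            std::string copy = data;
            if (!copy.empty() && std::rand() % 3 == 0) copy[0] ^= 0xFF;  // corrupt
            return copy;
        }

        // "Transport layer" role: resend until the received copy matches the original.
        bool DeliverReliably(const std::string& data, std::string& received,
                             int maxAttempts = 5) {
            for (int attempt = 0; attempt < maxAttempts; ++attempt) {
                received = DeliverOnce(data);
                if (Checksum(received) == Checksum(data))
                    return true;  // both locations now hold the same data
            }
            return false;  // give up after several failed attempts
        }

        int main() {
            std::string received;
            bool ok = DeliverReliably("hello, session layer", received);
            std::cout << (ok ? "delivered intact" : "delivery failed") << '\n';
        }

    A real transport protocol such as TCP does this with sequence numbers, checksums and retransmission timers rather than a whole-message compare, but the division of labour is the same as in the description above.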


  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-makers
    Oracle Exadata Database Machine vs. IBM Power Systems
    How to Weigh a Purchase Decision
    October 2012
    Download full report here
    In this research-based white paper, conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen because it is IBM's current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording us a more specific data point for comparison. This research found that Oracle Exadata:
    - Can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
    - Delivers dramatically higher performance, typically up to a 12X improvement, as described by customers, over their prior solution.
    - Requires 40% fewer systems administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
    - Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
    - Supplies a highly available, highly scalable and robust solution, with reserve capacity that makes Exadata easier for IT to operate because administrators can manage proactively, not reactively. Overall, Exadata operations and maintenance keep IT administrators from "living on the edge," and it's pre-engineered for long-term growth.
    Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total cost of ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

