Search Results

Search found 15863 results on 635 pages for 'made in india'.


  • mysql my.cnf ignored

    - by mr12086
    [issue] I'm trying to modify a my.cnf value on my production server, but the changes aren't taking effect after a sudo service mysql restart. Using an exact copy of the my.cnf (downloaded and replaced the original) on my development server, the changes made are visible from SHOW VARIABLES on the MySQL command line. my.cnf is located at /etc/mysql/my.cnf, and sudo find / -name my.cnf returns only /etc/mysql/my.cnf, so just one file exists on the entire system. Production is Ubuntu 10.04 LTS 64-bit, development is Ubuntu 11.10 32-bit; MySQL versions are 5.1.61 and 5.1.62 respectively. Kind regards. [my.cnf] Yes, it seems to have had all the comments removed and replaced with whitespace:
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0
        [mysqld]
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = 127.0.0.1
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        innodb_file_per_table = 1
        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M
        [mysql]
        [isamchk]
        key_buffer = 16M
        !includedir /etc/mysql/conf.d/
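    A quick way to narrow this down (paths and the variable name are just examples) is to ask mysqld itself which option files it reads on the production box and whether the value actually survived the restart - note that the trailing !includedir line means anything under /etc/mysql/conf.d/ can silently override my.cnf:

        # List the option files mysqld reads, in order (run on the production server)
        mysqld --verbose --help 2>/dev/null | grep -A1 "Default options"
        # Confirm whether the edited value took effect after the restart
        mysql -e "SHOW VARIABLES LIKE 'key_buffer_size';"
        # Anything in here is read after my.cnf and can override it
        ls -l /etc/mysql/conf.d/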

    Read the article

  • Managing user privileges, best practice.

    - by Loïc N.
    I am new to web development. I'm creating a website where different users can have different privileges, such as creating/editing/deleting news, or adding/editing/deleting whatever kind of content on the website. I started by creating a "user type" that would indicate the user's privileges (such as "user", "newser", "moderator", "admin", and so on), but I quickly started noticing issues that made me think this might be a naive approach. What if I want to give a regular user the right to edit a news item (for whatever reason)? Then the user would be half "user", half "newser", but the system I use can only handle one user type. So what would be the best practice here? I was thinking of removing the concept of roles (or "user types" such as newser) and only keeping the concept of "privilege", where every user could have zero to many privileges. So, to re-use the above example, if I wanted a user to have the right to edit some news, I would only have to give him an "edit news" privilege. Is this the way to go?

    Read the article

  • JavaServer Faces 2.0 for the Cloud

    - by Janice J. Heiss
    A new article now up on otn/java by Deepak Vohra titled "JSF 2.0 for the Cloud, Part One," shows how JavaServer Faces 2.0 provides features ideally suited for the virtualized computing resources of the cloud. The article focuses on the @ManagedBean annotation, implicit navigation, and resource handling. Vohra illustrates how the container-based model found in Java EE 7, which allows portable applications to target single machines as well as large clusters, is well suited to the cloud architecture. From the article:
    "Cloud services might not have been a factor when JavaServer Faces 2.0 (JSF 2.0) was developed, but JSF 2.0 provides features ideally suited for the cloud, for example:
      • The path-based resource handling in JSF 2.0 makes handling virtualized resources much easier and provides scalability with composite components.
      • REST-style GET requests and bookmarkable URLs in JSF 2.0 support the cloud architecture. Representational State Transfer (REST) software architecture is based on transferring the representation of resources identified by URIs. A RESTful resource or service is made available as a URI path. Resources can be accessed in various formats, such as XML, HTML, plain text, PDF, JPEG, and JSON, among others. REST offers the advantages of being simple, lightweight, and fast.
      • Ajax support in JSF 2.0 is integrable with Software as a Service (SaaS) by providing interactive browser-based Web applications."
    In Part Two of the series, Vohra will examine features such as Ajax support, view parameters, preemptive navigation, event handling, and bookmarkable URLs. Have a look at the article here.

    Read the article

  • HTTP/1.1 Status Codes 400 and 417, cannot choose which

    - by TheDeadLike
    I have been referred here in the hope of getting better help. I've got a processing file which handles the user-sent data; before that, however, it compares the input from the client to the expected values to ensure no client-side data change. I can't say I know a lot about HTTP status codes, but I have done some research on them to choose which one best fits unexpected-input handling. So I came up with: 400 Bad Request: the request cannot be fulfilled due to bad syntax. 417 Expectation Failed: the server cannot meet the requirements of the Expect request-header field. Now, I cannot really be sure which one to use. I have seen 400 Bad Request used a lot; however, what I get from its explanation is that the error refers to a malformed request rather than an illegal input. On the other side, 417 Expectation Failed seems to just fit my use, but I have never seen or experimented with this status before. I need your experience and opinions, thanks a lot! For full details, with form/process page drafts and my experiments, follow this link.

    Read the article

  • Have SSIS' differing type systems ever caused you problems?

    - by jamiet
    One thing that has always infuriated me about SSIS is the fact that every package has three different type systems; to give you an idea of what I am talking about consider the following:
      • The SSIS dataflow's type system is made up of types called DT_* (e.g. DT_STR, DT_I4)
      • The SSIS variable type system is based on .Net datatypes (e.g. String, Int32)
      • The types available for Execute SQL Task's parameters are based on something else - I don't exactly know what (e.g. VARCHAR, LONG)
    Speaking euphemistically ... this is not an optimum situation (were I not speaking euphemistically I would be a lot ruder) and hence I have submitted a suggestion to Connect at [SSIS] Consolidate three type systems into one requesting that it be remedied. This accompanying blog post is not however a request for votes (though that would be nice); the reason is actually subtler than that. Let me explain. I have been submitting bugs and suggestions pertaining to SSIS for years and have, so far, submitted over 200 Connect items. If that experience has taught me anything it is this - Connect items are not generally actioned if they are considered merely "nice to have". No, SSIS Connect items get actioned because they cause customers grief, and if I am perfectly honest I must admit that, other than being a bit gnarly, SSIS' three-type-system architecture has never knowingly caused me any significant problems. The reason for this blog post is to ask if any reader out there has ever encountered any problems on account of SSIS' three type systems or have you, like me, never found them to be a problem? Errors or performance degradation caused by implicit type conversions would, I believe, present a strong case for getting this situation remedied in a future version of SSIS, so if you HAVE encountered such problems I would encourage you to leave a comment on the Connect submission accordingly. Let me know in the comments too - I would be interested to hear others' opinions on this. @Jamiet

    Read the article

  • Dealing with "jumping" sprites: badly centered?

    - by GigaBass
    Thing is, I've used darkFunction Editor as a way to get all the sprite coordinates off a sprite sheet for each individual sprite, and I parse the .xml it generates inside my game. It all works fine while the sprites are all similarly sized, but not when a sprite changes from a small sprite into a big one, such as here: when going from walking in some direction to attacking, it starts "jumping" and appears glitchy, because it's not staying in the same correct position, only doing so for the right attacking sprite, due to the drawing being made from the lower left part of the rectangle. I think someone experienced will immediately recognize the problem I mean; if not, when I return home soon, I will shoot a little YouTube video demonstrating the issue! So the question is: what possible solutions are there? I've thought that some sort of individual per-frame "offset" system might be the answer, or perhaps splitting, in this case, the sprite in two - the sword, and the character itself - and drawing the sword according to the character's facing, but that might be overly complex. Another speculation would be that there might be some sort of method in libGDX, the library I'm using, that allows me to change the drawing center (which I looked for and didn't find), so I could choose from where the drawing starts.

    Read the article

  • Bug with Set / Get Accessor in .Net 3.5

    - by MarkPearl
    I spent a few hours scratching my head on this one... so I thought I would blog about it in case someone else has the same headache. Assume you have a class, and you want to use the INotifyPropertyChanged interface so that when you bind to an instance of the class, you get the magic sauce of binding to do your updates to the UI. Well, I had a special case where I wanted one of the properties of the class to add some additional formatting to itself whenever someone changed its value (see the code below).
        class Test : INotifyPropertyChanged
        {
            private string _inputValue;
            public string InputValue
            {
                get { return _inputValue; }
                set
                {
                    if (value != _inputValue)
                    {
                        _inputValue = value + "Extra Stuff";
                        NotifyPropertyChanged("InputValue");
                    }
                }
            }
            public event PropertyChangedEventHandler PropertyChanged;
            public void NotifyPropertyChanged(string info)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(info));
                }
            }
        }
    Everything looked fine, but when I ran it in my WPF project, the textbox I was binding to would not update. I couldn't understand it! I thought the code made sense, so why wasn't it working? Eventually StackOverflow came to the rescue, where I was told that it was a bug in the .Net 3.5 runtime and that a fix was scheduled for .Net 4. For those who have the same problem, here is the workaround... you need to invoke the NotifyPropertyChanged call on the application's dispatcher thread!
        public string InputValue
        {
            get { return _inputValue; }
            set
            {
                if (value != _inputValue)
                {
                    _inputValue = value + "Extra Stuff";
                    // React to the type of measurement
                    Application.Current.Dispatcher.BeginInvoke((Action)delegate { NotifyPropertyChanged("InputValue"); });
                }
            }
        }

    Read the article

  • SSH keys fail for one user

    - by Eli
    I just set up a new Debian server. I disabled root SSH and password auth, so you've gotta use a key file. For my primary user, everything works exactly as expected. I used ssh-keygen -t dsa and got myself a public and private key. Put one in authorized keys, put the other in a pem file locally. I wanted to create a user that I can deploy things with, so I did basically the same process. I addusered it, made a .ssh folder, ran ssh-keygen -t dsa (I also tried RSA), put the keys in their appropriate locations. No luck. I'm getting a Permission denied (publickey) error. When I use the exact same keys as the account that works, same error. When I enable password authentication, I can log in via SSH with the password. How do I debug this?
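    For anyone hitting the same wall, a minimal server-side checklist (the user name "deploy" and key file name are just placeholders): sshd silently ignores authorized_keys when the home directory or ~/.ssh tree is too permissive, and the real reason for the rejection shows up in the auth log or in a verbose client run.

        # Permissions sshd insists on before it will use the key
        chmod go-w /home/deploy
        chmod 700 /home/deploy/.ssh
        chmod 600 /home/deploy/.ssh/authorized_keys
        chown -R deploy:deploy /home/deploy/.ssh
        # Watch the server's reason for refusing the key while retrying
        tail -f /var/log/auth.log
        # Verbose client-side view of which keys are offered and why they fail
        ssh -v -i deploy.pem deploy@server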

    Read the article

  • Formatted a Bootcamped drive as a dynamic disk, now can't boot to either Mac or Windows

    - by Steven H
    I was trying to create an extra partition to get a file from the Windows side of my Macbook Air to the Mac side, and I accidentally made the disk dynamic without realizing it. I am now unable to boot to the Mac side (holding Alt to go into the system manager at startup doesn't even list the Mac partition), and the Windows side blue screens during boot (goes so quickly that it doesn't even get to the error code before restarting). What can I do to fix the issue? I don't know how to make a bootable flash drive that a Mac will recognize, and Disk Utility (via Internet Recovery) couldn't do anything. (cross-posted from apple.stackexchange)

    Read the article

  • Evil DRY

    - by StefanSteinegger
    DRY (Don't Repeat Yourself) is a basic software design and coding principle. But there is just no silver bullet. While DRY should increase maintainability by avoiding common design mistakes, it can lead to huge maintenance problems when misunderstood. The root of the problem is most probably that many developers believe that DRY means that any piece of code written more than once should be made reusable. But the principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system." The important thing here is "knowledge". Nobody ever said "every piece of code". I'll try to give some examples of misusing the DRY principle.
    Code Repetitions by Coincidence
    There is code that is repeated by pure coincidence. It is not the same code because it is based on the same piece of knowledge; it is just the same by coincidence. It's hard to give an example of such a case. Just think about some lines of code where the developer thinks "I already wrote something similar". Then he takes the original code, puts it into a public method - even worse, into a base class where none had been there before - puts some weird arguments and some if or switch statements into it to support all the special cases, and calls this "increasing maintainability based on the DRY principle". The resulting "reusable method" is usually something the developer cannot even give a meaningful name, because its contents aren't anything specific; it is just a bunch of code. For the same reason, nobody will really understand this piece of code. Typically this method only makes sense to call after some other method has been called. All the symptoms of really bad design are evident. Fact is, writing this kind of "reusable method" is worse than copy-pasting! Believe me. What will happen when you change this weird piece of code? You can't say, because you can't understand what the code is actually doing. So better not touch it anymore. Maintainability just died. Of course this problem exists with any badly designed code. But because the developer tried to make this method as reusable as possible, large parts of the system become dependent on it. Completely independent parts get tightly coupled by this common piece of code. Changing the single common place will have effects anywhere in the system - a typical symptom of too-tight coupling. Without trying to dogmatically (and wrongly) apply the DRY principle, you just had a system with a weak design. Now you have a system which just can't be maintained anymore.
    So what can you do against it? When making code reusable, always identify the generally reusable parts of it. Find the reason why the code is repeated; find the common "piece of knowledge". If you have to search too far, it's probably not really there. Explain it to a colleague; if you can't explain it, or the explanation is too complicated, it's probably not worth reusing. If you identify the piece of knowledge, don't forget to carefully find the place where it should be implemented. Reusing code is never worth giving up a clean design. Methods always need to do something specific. If you can't give a method a simple and explanatory name, you probably did something weird. If you can't find the common piece of knowledge, try to make the code simpler. For instance, if you have some complicated string or collection operations within this code, write some general-purpose operations into a helper class. If your code gets simple enough, it's not so bad if it can't be reused. If you are not able to find anything simple and reasonable, copy-paste it. Put a comment into the code to reference the other copies. You may find a solution later.
    Requirements Repetitions by Coincidence
    Let's assume that you need to implement complex tax calculations for many countries. It's possible that some countries have very similar tax rules. These rules are still completely independent from each other, since every country can change its own at any time. (Assume that this similarity is actually by coincidence and not by political membership - there might be basic rules applying to all European countries, etc.) Let's assume that there are similarities between an Asian country and an African country. Moving the common part to a central place will cause problems. What happens if one of the countries changes its rules? Or - more likely - what happens if users of one country complain about an error in the calculation? If there is shared code, it is very risky to change it, even for a bugfix. It is hard to find requirements that are repeated by coincidence, and then there is not much you can do against the repetition of the code. What you really should consider is making the coding of the rules as simple as possible. So this independent knowledge - "Tax Rules in Timbuktu" or wherever - should be as pure as possible, without much overhead and stuff that does not belong to it. That way you can write every independent requirement short and clean.
    DRYing try-catch and using Blocks
    This is a technical issue. Blocks like try-catch or using (e.g. in C#) are very hard to DRY. Imagine a complex exception-handling structure, including several catch blocks. When the contents of the try block as well as the contents of the individual catch blocks are trivial, but the whole structure is repeated in many places in the code, there is almost no reasonable way to DRY it.
        try
        {
            // trivial code here
            using (Thingy thing = new Thingy())
            {
                // trivial, but always different line of code
            }
        }
        catch (FooException foo)
        {
            // trivial foo handling
        }
        catch (BarException bar)
        {
            // trivial bar handling
        }
        catch
        {
            // trivial common handling
        }
        finally
        {
            // trivial finally block
        }
    The key here is that every block is trivial, so there is nothing to just move into a separate method. The only part that differs from case to case is the line of code in the body of the using block (or any other block). The situation is especially interesting if the many occurrences of this structure are completely independent: they appear in classes with no common base class, they don't aggregate each other, and so on. Let's assume that this is a common pattern in service methods within the whole system. Examples of Evil DRYing in this situation:
      • Put an if or switch statement into the method to choose the line of code to execute. There are several reasons why this is not a good idea: it creates the tightest possible coupling between the formerly independent implementations, it hurts the readability of the code, and it uses a parameter to control the logic.
      • Put everything into a method which takes a delegate as an argument to call. The caller just passes his "specific line of code" to this method. The code will be very unreadable, and the same maintainability problems apply as for any "Code Repetition by Coincidence" situation.
      • Force a base class onto all the classes where this pattern appears and use the template method pattern. It's the same readability and maintainability problem as above, but additionally complex and tightly coupled because of the base class. I would call this "Inheritance by Coincidence", which will not lead to great software design.
    What can you do against it?
      • Ideally, the individual line of code is a call to a class or interface, which could be made individual by inheritance. If that were the case, it wouldn't be a problem at all; I assume that it is not such a trivial case.
      • Consider refactoring the error concept to make error handling easier.
      • The last but not worst option is to keep the replications. Some patterns of code must be kept consistent, and there is nothing we can do against it - and no reason to make them unreadable.
    Conclusion
    The DRY principle is an important and basic principle every software developer should master. The key is to identify the "pieces of knowledge". There is code which can't be reused easily for technical reasons; it takes quite a bit of flexibility and creativity to keep such code simple and maintainable. It's not a problem of the principle, it is a problem of blindly applying the principle without understanding the problem it should solve. The result is mostly much worse than ignoring the principle.

    Read the article

  • Sprite and Physics components or sub-components?

    - by ashes999
    I'm taking my first dive into creating a very simple entity framework. The key concepts (classes) are:
      • Entity (has 0+ components, can return components by type)
      • SpriteEntity (everything you need to draw on screen, including lighting info)
      • PhysicsEntity (velocity, acceleration, collision detection)
    I started out with physics notions in my sprite component, and then later moved them into a sub-component. The separation of concerns makes sense; a sprite is enough information to draw anything (X, Y, width, height, lighting, etc.) and physics piggybacks (uses the parent sprite to get X/Y/W/H) while adding physics notions of velocity and collisions. The problem is that I would like collisions to be on an entity level -- meaning "no matter what your representation is (be it sprites, text, or something else), collide against this entity." So I refactored and redirected collision handling from entities to sprite.physics, while mapping and returning the right entity on physics collisions. The problem is that writing code like this.GetComponent<SpriteComponent>().physics is a violation of abstraction. Which made me think (this is the TLDR): should I keep physics as a separate component from sprites, or a sub-component, or something else? How should I share data and separate concerns?

    Read the article

  • Accessing A Shared Directory That Has An Account White List

    - by Xan
    I'm on a LAN here at work and I have my desktop sharing some of my project folders. I can access the computer via \\ComputerIP\, but I can't actually open any of the folders. Upon attempting, I get the error: "Windows cannot access \\ComputerIP\ProjectFolder. You do not have permission to access \\ComputerIP\ProjectFolder. Contact your network administrator to request access. For more information about permissions, see Windows Help and Support." Now, this is understood, considering I've made it so that you have to use the "Project" credentials to connect. I have a user account on my main computer hosting these shared folders that gives full access to the folders if you are this "Project" user. I can Remote Desktop to the computer just fine from my laptop using either set of credentials. When I try to open these folders it doesn't give me the option to apply any credentials like it does when I remote desktop. How am I supposed to gain access to these folders?

    Read the article

  • EC2 instance store cloning or to ebs via gui management console

    - by devnull
    I have found similar questions here, but the answers are either outdated or are from the command line. The case is this: I have an EC2 instance using the instance store (this was the only AMI available for Debian 6 in Ireland). Now, through the AWS GUI I can take a snapshot of the instance volume and even create a volume, but an image made from the snapshot doesn't boot. What is the best solution to either clone an EC2 instance that uses the instance store, OR launch a new EBS-backed instance (an identical clone) from the created snapshot of the instance store, FROM the GUI AWS management console and not the command line? Before turning this down, consider that there is no similar question on how to do it via the AWS management console. Hint: "can't be done" is not an appropriate answer, as you can create a snapshot of the instance-store-backed instance and/or a volume, and create an AMI from that snapshot.

    Read the article

  • Ctrl-C not killing process in CMD.EXE

    - by jtl999
    I've had this issue for a while, even after reinstalling. The issue happens after I reinstall all my programs and not in a fresh Windows install (obviously). I might have to spin up a VM and install each program one by one; I suspect it's Git for Windows with its mini Cygwin wrapper causing this issue. Anyway, the issue is basically that pressing Ctrl-C does not kill the running process. However, when I run cmd.exe or Git Bash as administrator, Ctrl-C works great again. Disabling UAC seems to break it again. I've made a video of the issue here. Many thanks.

    Read the article

  • Why won't vsftpd let me log in with a virtual user account?

    - by Ramon
    I would like to use vsftpd with virtual users and pam_pwdfile.so. I installed vsftpd and added two users (ramon and dragon) via htpasswd to my file /etc/vsftpd.passwd. /etc/pam.d/vsftpd is configured to use this file:
        auth    required pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
        auth    required pam_pwdfile.so pwdfile /etc/vsftpd.passwd
        account required pam_permit.so
        @include common-account
        @include common-session
    The user "ramon" is also available in /etc/passwd. A login to the FTP server with the user "ramon" works as expected, but a login using "dragon" does not :/ The result is always "Login failed: 530 Login incorrect." Since it's possible that I made a mistake, I tried the exact way documented in /usr/share/doc/vsftpd/examples/VIRTUAL_USERS/README. Still no luck. I can log in with the user "ramon", but not with the user "dragon". Any ideas?
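    A hedged debugging sketch (file names as in the question; the htpasswd hint is an assumption worth verifying): PAM logs the reason for the 530 to the auth log, and pam_pwdfile is commonly documented with crypt()-style hashes, so an entry generated with htpasswd's default MD5 format may be rejected while a crypt entry works.

        # Watch PAM's verdict while the "dragon" login fails
        tail -f /var/log/auth.log
        # Regenerate the failing user's entry with crypt() hashing (-d) and retest
        htpasswd -d /etc/vsftpd.passwd dragon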

    Read the article

  • Copy wrongs and Copyright

    - by Tony Davis
    Recently, a Chinese blog website copied, wholesale and without permission, a Simple-Talk article on troubleshooting locking and blocking. Our initial reaction was exasperation and anger, tempered slightly by the fact that there was, at the top, a clear link to the original, and the book from which it was extracted. On the day the copy was posted, our original article saw a 30K spike in visits, so the site clearly has a substantial following! This made us pause for thought. Indeed, we wondered whether it might not be more profitable, and certainly more enjoyable, to notify the offender of similar content and serve a "put up" notice, rather than the usual DMCA "take down". The DMCA request, issued to protect our and our authors' assets, is a necessary but tiresome chore. So often, simple communication and negotiation could have averted the need for it. We are, after all, in the business of presenting knowledge, information and help to the SQL Server Community. If only they had asked! Of course, one's attitude changes according to the motivation behind the copying of content. One of the motivations seems to be pure vanity; they do it to try to enhance their CV, or their company's expertise, by pretending to expertise they don't possess. There is a class of plagiariser, however, that is doing it purely for money, getting advertising revenue by attracting hapless readers to their site. Not content with stealing content, such sites can invest in "load-testing" services that generate traffic so realistic that even the search engines can be fooled. Stolen content, fake visitors, swindled advertisers. Zero tolerance is really the only way of dealing with plagiarism, and action will only be completely effective once Bing, Google, and the other search engines strike from their listings the rogue sites that refuse to take down plagiarised content. It is, after all, in everyone else's interests. Cheers, Tony.

    Read the article

  • tar incremental backup is backing everything up, every time when used on the Dropbox directory

    - by Cyclic
    I made an incremental backup about 10 months ago (on Jan 27, 2013), creating a .snar metadata file. Now, when I try to make an incremental backup using
        tar --create --file=dropbox_incremental_1.tar --listed-incremental=dropbox_0.snar Dropbox
    the command just re-backs up everything. I'm not an expert at Unix timestamps, but I noticed that virtually all of my directory timestamps are way more recent than the last time they changed. For my actual files, they look like this:
        Access: 2013-03-12 19:04:51.000000000 -0500
        Modify: 2012-09-30 15:10:47.000000000 -0500
        Change: 2013-03-12 19:04:51.306209672 -0500
    The 'Modify' timestamp seems correct, but the files were definitely not changed (at least not by anything that I know of) at the time they say they were. These files still seem to go into the incremental archive. What's happening here? Is there a way to tell tar to look at the 'modify' timestamp? Isn't this what it's supposed to be doing?
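    One detail worth checking (a sketch, not a guaranteed fix): GNU tar's --listed-incremental mode compares ctime as well as mtime, so a metadata-only change (permissions, ownership, or Dropbox rewriting attributes during a resync) is enough to pull files back into the archive, and it also re-dumps everything when the device number recorded in the .snar no longer matches the filesystem. The latter case can be ruled out with --no-check-device:

        tar --create \
            --file=dropbox_incremental_1.tar \
            --listed-incremental=dropbox_0.snar \
            --no-check-device \
            Dropbox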

    Read the article

  • Caching of path environment variable on windows?

    - by jwir3
    I'm assisting one of our testers in troubleshooting a configuration problem on a Windows XP SP3 system. Our application uses an environment variable, called APP_HOME, to refer to the directory where our application is installed. When the application is installed, we utilize the following environment variables: APP_HOME = C:\application\ PATH = %PATH%;%APP_HOME%bin Now, the problem comes in that she's working with multiple versions of the same application. So, in order to switch between version 7.0 and 8.1, for example, she might use: APP_HOME = C:\application_7.0\ (for 7.0) and then change it to: APP_HOME = C:\application_8.1\ (for 8.1) The problem is that once this change is made, the PATH environment variable apparently still is looking at the old expansion of the APP_HOME variable. So, for example, after she has changed APP_HOME, PATH still refers to the 7.0 bin directory. Any thoughts on why this might be happening? It looks to me like the PATH variable is caching the expansion of the APP_HOME environment variable. Is there any way to turn this behavior off?

    Read the article

  • Why are two indicator-network versions being worked on?

    - by Daniel Rodrigues
    Some months ago, on the road to Ubuntu Maverick, a new system indicator, network (with connman as a backend), started to be developed. The plan was to get it into UNE and release it with no notification area. Unfortunately it didn't make it into the final version. However, continued efforts are still being made to improve it, and I'm getting regular updates. From a blueprint from the last UDS, I read that the plan was to ship no notification area and only indicators. For that, it was decided that nm-applet (backend: NetworkManager) should be ported to the appindicator library. Today I discovered that those efforts are going on and an initial version is available for testing from Matt Trudel's PPA (Natty only). So, my question, to whoever has the necessary info, is: wouldn't it be easier to join efforts and concentrate the work on just one version (probably the NetworkManager backend, as that's the official plan), instead of splitting those efforts apart and hampering both testing and development? Both indicators are being developed by Canonical engineers, and that really doesn't make much sense. So, any Canonical engineer willing to clarify this?

    Read the article

  • Can't change to Korean-named directory on my debian server

    - by DaLynX
    I made an rsync backup of some directories from a MacBook laptop to a Debian server. Some of these have Korean characters (Hangeul) in their names. After fixing my server's locale, the names display fine when I do an ls, for instance. But I can't cd into them. Example:
        $ ls -1 | head
        ???
        dirA
        dirB
        …
    But if I try to browse that directory:
        $ cd ? ? ?
        cd: 3: can't cd to ???
    Any idea what's wrong and how to fix it?
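    A small sketch (the locale value is an example): make sure the session itself is running a UTF-8 locale, then enter the directory without typing the Hangeul name by hand.

        locale                                    # LANG/LC_ALL should end in .UTF-8
        export LC_ALL=en_US.UTF-8
        cd "$(find . -mindepth 1 -maxdepth 1 -type d | head -n 1)"   # or rely on tab completion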

    Read the article

  • Snap Server 18000 connection help!

    - by sicko666
    I wonder if anyone here can help me. I have a home server setup made up of old secondhand computers: 2 servers running Windows Server 2003, 1 workstation running Windows 7, a 16-port switch and an ADSL Ethernet modem. All of these connect and talk to each other fine, but then I got a "Snap Server 18000" and a "Snap Disk 30SA" SATA array. When I turn the Snap on, it boots past the BIOS, runs a kernel, then displays: "This device cannot be managed via the video/kbd/mouse interface. The video is now disabled. You may access the management functions from your web browser." Only, none of the other PCs detect it, so no browser can find it! I have checked all cables, and all LEDs indicate there's a connection. I have installed the Windows iSCSI initiator and the Adaptec "Snap Server Manager" on all PCs, but still it's not detected. I don't know what else to do, please advise!
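    Since the console says the web interface is the only management path, a hedged first step is simply finding what address the appliance picked up and browsing to it (subnet range is an example; nmap is assumed to be installed on whichever machine runs the scan):

        # Ping-scan the local subnet and look for the new host
        nmap -sn 192.168.1.0/24
        # Or check which IP/MAC pairs this workstation has already seen on the segment
        arp -a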

    Read the article

  • Error pushing to remote with git

    - by pcm2a
    I have a fresh CentOS 6 server stood up, and I have installed Git version 1.7.1 through yum. I am using the smart HTTP method through Apache for access. When I try to push to the remote server this is what I get:
        $ git push origin master
        Password:
        Counting objects: 6, done.
        Compressing objects: 100% (3/3), done.
        Writing objects: 100% (6/6), 436 bytes, done.
        Total 6 (delta 0), reused 0 (delta 0)
        error: unpack failed: index-pack abnormal exit
    I have tried these things, which made no difference:
        chown -R apache:apache /path/to/git/repository   (httpd runs as apache)
        chown -R apache:users /path/to/git/repository
        chmod -R 777 /path/to/git/repository   (obviously not secure, but I wanted to eliminate file permissions as the problem)
    What can I try to get pushing to work?
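    A hedged permissions sketch (repository path as in the question; the SELinux step only applies if CentOS is enforcing): "unpack failed: index-pack abnormal exit" over smart HTTP often means the server-side process could not write into the repository's objects directory, even when the Unix modes look permissive.

        chown -R apache:apache /path/to/git/repository
        cd /path/to/git/repository
        git config core.sharedRepository group      # keep newly created objects group-writable
        find . -type d -exec chmod g+ws {} \;       # setgid dirs so new files inherit the group
        # On CentOS, SELinux can block writes even with mode 777
        getenforce
        chcon -R -t httpd_sys_rw_content_t /path/to/git/repository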

    Read the article

  • Coders For Charities

    - by Robz / Fervent Coder
    Last weekend I had the opportunity to give back to the community doing what I love. As geeks we don't usually have this opportunity. The event is called Coders 4 Charities (C4C) and it's a grueling weekend of coding for nearly 30 hours over the weekend. When you finish, you get to present to the charity and all of the other groups what you have completed. From the site: Coders For Charities is a 3-day charity event that pairs charities and local software developers. Charities often do not have the funds to implement a new website or intranet or database solution. Software developers often do not volunteer for charities because their skills do not apply. This event is the perfect marriage of these two needs; software developers volunteering their time to help charities better serve their community through the latest technology! The actual event brought together multiple charities and about 50 developers, designers, business analysts, etc., each group working with a different charity to come up with a solution that they could implement in less than 3 days. C4C provided a place and food for us so that we wouldn't have to leave much during the time we had to implement our solution. They also provided games like Rock Band so we could get away and clear our minds for a few moments if necessary. I don't think we made it down there to play, but the food and drinks were a huge help for us. The charity we picked was Harvest Home. They had a need for an online intranet site where they could track membership and gardening. Over the next few days we worked on a site we could give them. Below is a screenshot with private data marked out. It was an awesome and humbling experience to be able to give back to a charity, and I'm happy I was a part of it. I would definitely do it again. How often do we get to use our abilities to volunteer our time for a charity?

    Read the article

  • links for 2011-02-28

    - by Bob Rhubart
    Apache Tuscany : SCA Java 2.x Releases (tags: ping.fm) Richard Veryard on Architecture: Modernism and Enterprise Architecture "Underlying conventional enterprise architecture theory and practice are some implicit assumptions that could be loosely characterized as modernist. Several people are offering more or less radical departures from conventional enterprise architecture..." - Richard Veryard (tags: ping.fm entarch) Java / Oracle SOA blog: Building an asynchronous web service with OSB "A few weeks ago I made a blogpost over how you can build an asynchronous web service with JAX-WS. In this blogpost I will do the same in the Oracle Service Bus." - Oracle ACE Edwin Biemond (tags: oracle otn oracleace servicebus esb osb webservices soa) Enterprise Software Development with Java: GlassFish 3.1 arrived! Yes sir, we do cluster now! "GlassFish 3.1 is finally there. As promised by Oracle back in March last year! And it is an exciting release. It brings back all the clustering and high availability support we were missing since 2.x into the Java EE 6 world." - Oracle ACE Director Markus Eisele (tags: oracle otn oracleace glassfish)

    Read the article

  • Monitoring outgoing bandwidth of application

    - by jnolte
    I currently have a VPS that is consuming a ton of outgoing bandwidth, and I am trying to drill down to where this may be coming from. Does anyone know of a logical way to go about finding out which pages on the site are consuming the most outgoing data? We have done a ton of front-end optimizations to the site and our Google PageSpeed score is 85%, so I feel we have done a pretty good job of optimizing the site for speed. Can someone lend some insight on how they have made similar optimizations?
    Application / server stack:
        LEMP running Varnish Cache / PHP5-FPM
        WordPress running W3 Total Cache
        Ubuntu 12.04 LTS
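    A rough way to see which URLs account for the outgoing bytes (field numbers assume the default combined log format; adjust the log path and fields to match the actual setup behind Varnish):

        # Sum response bytes per request path and show the top 20
        awk '{ bytes[$7] += $10 } END { for (u in bytes) printf "%12d %s\n", bytes[u], u }' \
            /var/log/nginx/access.log | sort -rn | head -20
        # Live per-host view of what is leaving the box right now
        sudo iftop -i eth0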

    Read the article
