Search Results

Search found 19017 results on 761 pages for 'purchase order'.


  • Redirect input from one terminal to another

    - by Niki Yoshiuchi
    I have sshed into a Linux box and I'm using dvtm and bash (although I have also tried this with GNU screen and bash). I have two terminals, currently /dev/pts/29 and /dev/pts/130, and I want to redirect the input from one to the other. From what I understand, in /dev/pts/130 I can type:

        cat </dev/pts/29

    Then, when I type in /dev/pts/29, the characters should show up in /dev/pts/130. However, what ends up happening is that only every other character I type gets redirected. For example, if I type "hello" I get this:

        /dev/pts/29   | /dev/pts/130
        $             | $ cat </dev/pts/29
        $ el          | hlo

    This is really frustrating, as I need to do this in order to redirect the I/O of a process running in gdb (I've tried both run /dev/pts/# and set inferior-tty /dev/pts/#, and both resulted in the aforementioned behavior). Am I doing something wrong, or is this a bug in bash/screen/dvtm?
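    A hedged workaround sketch, assuming a FIFO is acceptable in place of a raw pts (paths are hypothetical): with a named pipe there is only ever one reader of the keystrokes, so the shell on the source terminal no longer competes with cat for characters.

        # terminal A: create the FIFO and keep feeding it with whatever is typed
        mkfifo /tmp/inferior-in
        cat > /tmp/inferior-in

        # terminal B: inside gdb, point the inferior's stdin at the FIFO
        (gdb) run < /tmp/inferior-in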

    Read the article

  • GitHub: Are there external tools for managing issues list vs. project backlog

    - by DXM
    Recently I posted one of my projects1 on GitHub and, as I was exploring the capabilities of the site, I noticed they have a rather decent issue tracking section. I want to use that section so that a) other people can report bugs if they'd like and b) other people can see which bugs I'm aware of. However, as others have noted, the issues list cannot be prioritized in order to create a project backlog. For now my backlog has been a text file, but I'd like to have it integrated so the same information isn't maintained in different places. Having a fully ordered list, which is something we also practice at work, has been very useful, as I can open one file, start with line 1 and fire off 2 or 3 items in one sitting without having to go back to a full issues/stories bucket. GitHub doesn't offer this. What GitHub does offer is a very nice and clean API, so issues can easily be exported into anything else. I've searched to see if there are other websites (like Trello) that integrate with GitHub issues, but did not find anything. Does anyone know of such a product, service or offline tool? Those of you who use GitHub, what is your experience in managing a backlog? I kinda hate the idea of manually managing two disconnected lists, like some people seem to be doing with wiki project pages. 1 - are shameless plugs allowed on this site? Searched but didn't find a definite answer. If it's bad practice, STOP and don't read further. As a developer I got sick and tired of navigating to the same set of folders 30 times a day, so I wrote a little, auto-collapsible utility that gets stuck to the desktop and allows easy access to the folders you constantly use.
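    As an aside, a hedged sketch of the kind of export that API makes easy (the v3 REST endpoint is assumed; OWNER/REPO are placeholders):

        # pull a repository's open issues as JSON, up to 100 per page
        curl -s "https://api.github.com/repos/OWNER/REPO/issues?state=open&per_page=100" > backlog.json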

    Read the article

  • apache proxypass to webmin

    - by Ricardo
    I have a problem with an apache2-to-Webmin redirect. My ProxyPass configuration is:

        ProxyRequests Off
        ProxyPreserveHost On
        SSLProxyEngine On
        ProxyPass /admin/webmin/ https://localhost:10000/
        ProxyHTMLURLMap https://localhost:10000 /admin/webmin
        <Location /admin/webmin/>
            ProxyHTMLExtended On
            SetOutputFilter proxy-html
            ProxyPassReverse https://localhost:10000/
            ProxyPassReverse https://xxxxxxxxxxxxxxxxxxxx.amazonaws.com:10000/
            Order allow,deny
            Allow from all
        </Location>

    When I connect using https://xxxxxxxxxxxxxxxxxxxx.amazonaws.com:10000/ there is no problem. But when I connect using https://xxxxxxxxxxxxxxxxxxxx.amazonaws.com/admin/webmin, the page loses its CSS and, after login, shows me the error: The requested URL /session_login.cgi was not found on this server. I think it is an error in my ProxyPass, but I don't know what it is.
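    A hedged guess at a missing piece: Webmin itself may also need to know it is being served under a path prefix, otherwise it keeps generating absolute links such as /session_login.cgi. If your Webmin version supports the webprefix options (treat the option names below as assumptions and check the Webmin docs), something like this in /etc/webmin/config could be worth trying:

        webprefix=/admin/webmin
        webprefixnoredir=1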

    Read the article

  • Does anybody wish to help a poor researcher publish a paper?

    - by Mihai Todor
    I don't know if this is a good place to beg for help, but here it goes: basically, I need to run a secure recommender system simulation (C++ console application) in order to meet tonight's deadline, and the faculty's server grid decided to go offline. I could really use something like 10+ (actually, about 16 would be required to meet the deadline) virtual instances of some Linux that has GMP installed... Ideally, they should all have the same specs, because a part of the simulation will represent performance benchmarks. If my question is inappropriate in any way, I kindly ask the administrators to remove it.

    Read the article

  • internal compiler error

    - by hyperboreean
    I am getting this message: "internal compiler error: Segmentation fault. Please submit a full bug report, with preprocessed source if appropriate. See <file:///usr/share/doc/gcc-4.4/README.Bugs> for instructions." for every compilation that takes longer (i.e. the Linux kernel, KDE sources, etc.). I've tried other distributions (at that moment I was on Fedora 12, now on Debian; there was a SUSE as well) and it didn't help. I've tried replacing my hard disk, since it needed an upgrade either way - that didn't work either. I assumed it was the RAM's fault - I tested the modules with memtest and it says they are fine. Does anyone know what else I can do in order to figure out where the problem is?
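    A hedged sketch of further hardware checks that are often suggested for random compiler segfaults (the tools - lm-sensors, smartmontools, stress - are assumed to be installed):

        sensors                                                  # watch CPU temperature while a long build runs
        sudo smartctl -a /dev/sda                                # check the disk's SMART health
        stress --cpu 4 --vm 2 --vm-bytes 512M --timeout 600s    # load CPU and RAM together and see if anything falls over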

    Read the article

  • How to make Unity 3D work with Bumblebee using the Intel chipset

    - by EboMike
    I have a Sony VAIO S laptop with the dreaded Optimus and finally managed to get Bumblebee to work fully on Ubuntu 12.04 so that I can utilize both the hardware acceleration of the Intel chipset as well as the Nvidia one via optirun and/or bumble-app-settings. However, the desktop effects don't work. But they should, I vaguely remember that they worked for a while before I had Bumblebee installed. This is what I get with the support test:

        :~$ /usr/lib/nux/unity_support_test -p
        Xlib: extension "NV-GLX" missing on display ":0".
        OpenGL vendor string: Tungsten Graphics, Inc
        OpenGL renderer string: Mesa DRI Intel(R) Ivybridge Mobile
        OpenGL version string: 1.4 (2.1 Mesa 8.0.2)
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: yes
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: no
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

    First of all, I kind of doubt that the chipset doesn't support VBOs (essentially a standard feature in GL). Neither Xorg.0.log nor Xorg.8.log show any particular errors. As for the Nvidia drivers: in order to get them to work, I had to install the 304.22 drivers (older ones wouldn't work). They clobbered libglx.so, so I reinstated the xserver-xorg-core libglx.so in its original place, moved Nvidia's libglx.so to an nvidia-specific folder and specified that folder in the bumblebee.config. That seems to work and shouldn't cause the problem I see here. For fun, I tried to use the Nvidia chipset for Unity, but that didn't fly either:

        ~$ optirun /usr/lib/nux/unity_support_test -p
        OpenGL vendor string: NVIDIA Corporation
        OpenGL renderer string: GeForce GT 640M LE/PCIe/SSE2
        OpenGL version string: 4.2.0 NVIDIA 304.22
        Not software rendered: yes
        Not blacklisted: yes
        GLX fbconfig: yes
        GLX texture from pixmap: no
        GL npot or rect textures: yes
        GL vertex program: yes
        GL fragment program: yes
        GL vertex buffer object: yes
        GL framebuffer object: yes
        GL version is 1.4+: yes
        Unity 3D supported: no

    Read the article

  • How to correctly export UV coordinates from Blender

    - by KlashnikovKid
    Alright, so I'm just now getting around to texturing some assets. After much trial and error I feel I'm pretty good at UV unwrapping now, and my work looks good in Blender. However, either I'm using the UV data incorrectly (I really doubt it) or Blender doesn't seem to export the correct UV coordinates into the .obj file, because the texture is mapped differently in my game engine. And in Blender I've played with the texture panel and its mapping options, and have noticed they don't appear to affect the exported .obj file's UV coordinates. So I guess my question is: is there something I need to do prior to exporting in order to bake the correct UV coordinates into the .obj file? Or something else that needs to be done to massage the texture coordinates for sampling? Or any thoughts at all on what could be going wrong? (Also, here is a screenshot of my diffuse texture in Blender and in the game engine. As you can see in the images, I have the same problem with a simple test cube not getting correct UVs either.) http://www.digitalinception.net/blenderSS.png http://www.digitalinception.net/gameSS.png
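    One hedged guess worth ruling out: OBJ texture coordinates use a bottom-left origin, while many engines and samplers assume top-left, so the V coordinate often has to be flipped at load time. A one-line sketch for the loader (variable name is hypothetical):

        # flip V on import if the engine's texture origin is top-left
        v = 1.0 - v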

    Read the article

  • How to send connection type (SSH|Telnet) info in Radius Access Requests on Cisco router?

    - by Gianni Costanzi
    I've configured the following on a Cisco router:

        aaa authentication login default group radius local
        !
        radius-server host x.x.x.x auth-port 1012 acct-port 1013
        radius-server host y.y.y.y auth-port 1012 acct-port 1013
        radius-server retransmit 1
        radius-server timeout 3
        radius-server key 7 xxxxxxxxx

    I'd like to be able to specify some RADIUS options in order to add information about the type of connection for which a user is being authenticated, i.e. I'd like the RADIUS server to receive, in the Cisco router's RADIUS Access-Request, information about the connection being SSH or Telnet. I'd like to find something that automatically adds this info to the access request, without specific configurations on the VTY lines dedicated to SSH and to Telnet. Any idea about that?

    Read the article

  • Problems installing Ubuntu on a vaio with SSD, GRUB installation failure

    - by Alberto
    I have installed and used Ubuntu on several computers, but now I have a problem that I don't know how to solve. I have a Vaio (product name: VPCZ13C5E) with a 128 GB SSD, and I decided to install Ubuntu (12.04, but I have tried older versions as well). First I tested with a live USB, and everything was fine, so I decided to go for the complete installation. Then everything went as follows: I chose to use the whole disk (first option, formatting everything) and got the message "Executing 'grub-install /dev/sdb' failed. This is a fatal error." After clicking OK I got another window with 3 options: the first offers different devices to install the bootloader on (I tried all of them and none works); the second is "Continue without a bootloader", in which case I get "You will need to manually install a bootloader in order to start Ubuntu"; the third is to cancel the installation. So, I chose to continue without a bootloader. Then I restarted the computer (with the live CD) and in a terminal typed sudo fdisk -l, but I obtained fdisk: unable to seek on /dev/sda: Invalid argument. What can I do? Any help will be appreciated.

    Read the article

  • Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reach a significant point in development, e.g. a working build. So I have a longish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.:

        20100523  20100614  20100721  20100722  20100809  20100901  20101001
        20101003  20101104  20101119  20101203  20101218  20110102

    I'm looking for a script to import each of these snapshots as a new Subversion revision of the source tree, the end result being that the HEAD revision is the same as the last snapshot, and the other revisions are numbered accordingly. Some other requirements:

    - The HEAD revision should not be cumulative of the previous snapshots, i.e. files that appeared in older snapshots but which don't appear in later ones (e.g. due to refactoring etc.) should not appear in the HEAD revision.
    - Meanwhile, there should be continuity between files that do persist between snapshots. Subversion should know that there are previous versions of these files and not treat them as brand new files within each revision.

    Some background about my aim: I need to formally revision control this work rather than keep local private snapshot copies. I plan to release this work as open source, so version controlling would be highly recommended. I am evaluating some of the current popular version control systems (Subversion and Git), BUT I definitely need a working solution in Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering, as I would also like a solution in Git (I will post separately for Git, so separate camps of folks who have expertise in Git and Subversion will be able to give focused answers on one or the other). The same question but for Git: Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree? There is an outline answer for Subversion on stackoverflow.com, but not enough specifics about the script: what commands to use, code to check valid scenarios if necessary - i.e. a working script, basically. http://stackoverflow.com/questions/2203818/is-there-anyway-to-import-xcode-snapshots-into-a-new-svn-repository
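    A rough sketch of the kind of import loop being asked about - not a tested solution; the repository URL and working-copy path are assumptions:

        # one-off setup: a working copy of the (initially empty) trunk
        svn checkout file:///path/to/repo/trunk wc

        for snap in 20100523 20100614 20100721 20100722 20100809 20100901 \
                    20101001 20101003 20101104 20101119 20101203 20101218 20110102; do
            # make the working copy match the snapshot exactly, keeping .svn metadata
            rsync -a --delete --exclude=.svn "$snap"/ wc/
            (
                cd wc
                svn add --force .                                          # schedule files new in this snapshot
                svn status | awk '/^!/{print $2}' | xargs -r svn delete    # schedule files that disappeared
                svn commit -m "Import snapshot $snap"
            )
        done

    Because each snapshot is rsynced over the same working copy, persisting files keep their history while files missing from a snapshot are deleted in that revision, which matches both requirements above.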

    Read the article

  • Real time mirroring between two sql server databases

    - by Matt Thrower
    Hi, I'm a C# programmer, not a DBA, and I've had the (mis)fortune to be handed a database admin task, so please bear this in mind when answering this question. What I've been asked to do is to create a real-time two-way mirror between two databases with a 10 Megabit connection between them, so that when either changes it updates the other. This is not a standard data mirroring/failover task where one DB is the master and the other is a backup - both are live and each needs to instantly reflect changes made to the other. In my head this sounds like a tall order, one which may even be impossible - after all, in a rapidly changing environment with lots of users this is going to be massively resource intensive and create locks and queues of jobs all over the place. Is it possible? If so, can anyone either give me some basic instructions and/or point me at some places to start my reading and research? Cheers, Matt

    Read the article

  • An unexpected pleasure from Windows 8

    - by eddraper
    This post is certainly on the more nuanced side of all the goodness that is Windows 8, but it’s about something that’s really changed my PC usage experience for the better. Besides being a geek and enjoying all the techno-thrills and chills that go along with sitting in front of a keyboard all day, I really love the forest.  Trees have always been special to me.  The feeling of being in the forest with all the sounds and ambiance, the broken light, the fragrance of the air… it’s paradise to me. As I can’t get there often, due to work, and quite often the heat here in Texas, I’ve found something that can at least partially fill the gap…  When you install Windows 8, you’ll have an app called “Naturespace” from http://www.naturespace.com/ .  It boasts a number of predefined loops in what they call “holographic audio.”  They’re essentially high-tech 3D sound fields recorded in natural environments. After checking them out, I really liked the sound of the “Daybreak” selection. A great benefit is that you don’t have to be in Metro/Modern/Windows App Store mode in order to keep the sound playing.  To start the day, I click on Daybreak, start it, then go back to the desktop and fire up VS, Chrome, etc. As I work and play, I’m surrounded by this delightful background ambiance which relaxes me and puts my mind at ease. Give it a try.  I think you’ll like it.  And no, you don’t need ear buds or headphones to get the benefit.

    Read the article

  • Deleting a row from self-referencing table

    - by Jake Rutherford
    Came across this the other day and thought “this would be a great interview question!” I’d created a table with a self-referencing foreign key. The application was calling a stored procedure I’d created to delete a row, which caused, of course, a foreign key exception. You may say “why not just use the cascade delete option?” Good question, easy answer. With a typical foreign key relationship between different tables that would work. However, even SQL Server cannot do a cascade delete of a row on a table with a self-referencing foreign key. So, what do you do? In my case I re-wrote the stored procedure to take advantage of recursion:

        -- recursively deletes a Foo
        ALTER PROCEDURE [dbo].[usp_DeleteFoo]
             @ID int
            ,@Debug bit = 0
        AS
            SET NOCOUNT ON;

            BEGIN TRANSACTION
            BEGIN TRY
                DECLARE @ChildFoos TABLE
                (
                    ID int
                )

                DECLARE @ChildFooID int

                INSERT INTO @ChildFoos
                SELECT ID FROM Foo WHERE ParentFooID = @ID

                WHILE EXISTS (SELECT ID FROM @ChildFoos)
                BEGIN
                    SELECT TOP 1 @ChildFooID = ID FROM @ChildFoos

                    DELETE FROM @ChildFoos WHERE ID = @ChildFooID

                    EXEC usp_DeleteFoo @ChildFooID
                END

                DELETE FROM dbo.[Foo]
                WHERE [ID] = @ID

                IF @Debug = 1 PRINT 'DEBUG:usp_DeleteFoo, deleted - ID: ' + CONVERT(VARCHAR, @ID)

                COMMIT TRANSACTION
            END TRY
            BEGIN CATCH
                ROLLBACK TRANSACTION

                DECLARE @ErrorMessage VARCHAR(4000), @ErrorSeverity INT, @ErrorState INT
                SELECT @ErrorMessage = ERROR_MESSAGE(), @ErrorSeverity = ERROR_SEVERITY(), @ErrorState = ERROR_STATE()
                IF @ErrorState <= 0 SET @ErrorState = 1

                INSERT INTO ErrorLog(ErrorNumber, ErrorSeverity, ErrorState, ErrorProcedure, ErrorLine, ErrorMessage)
                VALUES(ERROR_NUMBER(), @ErrorSeverity, @ErrorState, ERROR_PROCEDURE(), ERROR_LINE(), @ErrorMessage)

                RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState)
            END CATCH

    This procedure first determines any rows which have the row we wish to delete as their parent. It then iterates over each child row, calling the procedure recursively in order to delete all of the descendants before eventually deleting the row we wish to delete.

    Read the article

  • Moving case sensitive Linux files via Windows

    - by sunwukung
    Hi, the company I work for is currently trying to move a Magento installation from one server to another. However, the product images are saved in alphabetically indexed folders - with an added twist: some of the letters are the same but have a different case, i.e. a, A, b, C, D, e, E, f, F, G, h, I. That being the case, when we try to drag those files down from FTP in order to move them, Windows does not honour the case-sensitive distinction and we are losing several image folders. Is there a simple workaround for this issue? Any help is greatly appreciated. regards SWK
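    A hedged workaround sketch, assuming shell access to both Linux servers (paths are placeholders): bundle the folders into a single archive on the source, so Windows never has to create the case-sensitive names itself, then unpack on the destination.

        # on the old server
        tar czf media-images.tar.gz /var/www/magento/media
        # copy the one archive across (FTP, scp, ...), then on the new server
        tar xzf media-images.tar.gz -C /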

    Read the article

  • snow leopard: how can i monitor loading of specific extensions?

    - by ufk
    Hiya. I'm installing the VoodooHDA extension in order for an Intel 82801I (ICH9 Family) HD Audio 8086:293E sound card to operate. The sound card does not work, but I have no idea how to debug the issue. How can I check that the module was loaded properly? How can I make sure that there were no conflicts? How can I make sure that there were no dependency issues with another module? Using Snow Leopard 64 bit. Thanks!
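    A hedged sketch of the usual first checks on 10.6 (the kext path and tool availability are assumptions):

        kextstat | grep -i voodoo                                     # is the extension loaded at all?
        sudo kextutil -nt /System/Library/Extensions/VoodooHDA.kext   # dependency/ownership diagnostics without actually loading it
        tail -f /var/log/system.log                                   # watch for load errors and conflicts at boot or load time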

    Read the article

  • MySQL open files limit

    - by Brian
    This question is similar to set open_files_limit, but there was no good answer. I need to increase my table_open_cache, but first I need to increase the open_files_limit. I set the option in /etc/mysql/my.cnf: open-files-limit = 8192 This worked fine in my previous install (Ubuntu 8.04), but now in Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped to. Anyway, I tried getting around it by adding a line like this in /etc/security/limits.conf: mysql hard nofile 8192 I also tried adding this to the pre-start script in mysql's upstart config (/etc/init/mysql.conf): ulimit -n 8192 Obviously neither of those things worked. So where is the hoop that has been added between Ubuntu 8.04 and 10.04 through which I must jump in order to actually increase the open files limit?
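    A hedged sketch of what may be getting in the way (an assumption, to be verified against your setup): on 10.04 mysqld is started by upstart, which does not consult /etc/security/limits.conf, and a ulimit issued in the pre-start script only affects that pre-start shell. Upstart has its own per-job stanza for this:

        # /etc/init/mysql.conf -- soft and hard nofile limits for the job itself
        limit nofile 8192 8192

    After adding it, a full stop mysql followed by start mysql (rather than restart) is usually needed so the changed job definition is re-read - again, an assumption worth checking.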

    Read the article

  • Why maven so slow compared to automake?

    - by ???'Lenik
    I have a Maven project consisting of around 100 modules. I have reasons to decompose the project into so many modules, and I don't think I should merge them in order to speed up the build process. I have read a lot of projects by other people, e.g. the Maven project itself, Apache Archiva, and the Hudson project; they all consist of a lot of modules, nearly 100 maybe, more or less. The problem is that building them all takes so much time: 3 hours for the first build (this is acceptable because there are a lot of artifacts to download), and 15 minutes for the second build (this is not acceptable). With automake, things are similar: the first time, you need to configure the project to prepare the magical config.h file, which is far more complex than what Maven does. But it's still fast, maybe 10 seconds on my Debian box. After that, make install requires maybe 10 minutes for the first build. However, once everything is prepared and the .o object files are generated, they don't have to be rebuilt at all for the second build. (In Maven, everything is rebuilt every time.) I really wonder how people working on Maven projects can bear this long time for each build; I just can't sit calmly through a Maven build, it takes too long, really.
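    For what it's worth, a hedged sketch of flags that avoid rebuilding the whole reactor on every run (module names are placeholders; check the behaviour against your Maven version):

        mvn -o install -pl my-module -am     # offline; build one module plus the modules it depends on
        mvn -o install -pl my-module -amd    # offline; build one module plus the modules that depend on it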

    Read the article

  • Yahoo search: different results shown in two identical searches

    - by Marco Demaio
    Hello, simple question: searching on http://www.yahoo.it for villa matrimonio bologna, I noticed Yahoo shows different results. You need to retry a few times to see this, maybe exiting the browser and opening it again, or maybe searching once, clearing the browser cookies and then searching again (it's even easier to test if you use two different browsers at the same time to search for the same phrase). Anyway, in order to reproduce this easily, I write down here the queries shown in the address bar after the search, so you can just click on these to see the results:

    http://it.search.yahoo.com/search;_ylt=AirvLYKvBMPP_6MpAmONN14brK5_?vc=&p=villa+matrimonio+bologna&toggle=1&cop=mss&ei=UTF-8&fr=yfp-t-709
    http://it.search.yahoo.com/search;_ylt=AirvLYKvBMPP_6MpAmONN14brK5_?vc=&p=villa+matrimonio+bologna&toggle=1&cop=mss&ei=UTF-8&fr=sfp

    Note the last parameter fr is different, but it's Yahoo that set it (not me); I don't even know what it means. You can see in the search box that the searched phrase is IDENTICAL in both cases. So why is Yahoo giving out different results for the same search phrase? I used the same browser and performed the test within a few minutes by simply trying more than once. You may also notice that the number of results returned (written on the left side of the page) is different: the 1st search returns 274K results, the 2nd one 5.38M results. Actually you might think that this is just an error on Yahoo's side, but for almost a year, while looking once in a while at some websites to see how they are ranking on Yahoo and also on Google, I have noticed that two searches on the same phrase show different results even on the same day, after a few minutes/hours. I couldn't reproduce this behaviour on Google, so I can't say for sure, but since it seems to me it has happened sometimes, I was wondering if any of you have noticed it too. Do you know if this is the normal behaviour of search engines? Because if it's normal (and it's just me noticing it only now), I wonder how you can tell how well a site is ranking on a search engine; you could even see one of your customers' websites ranking differently compared to what your customer sees on his PC.

    Read the article

  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I have been assigned to work on a major requirement that falls between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and who did so without leaving a trace of documentation. Here were my initial steps in approaching the problem:

    - Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes".
    - Knowing the widespread use and effects of this requirement, and in case it could not be finished prior to release, I asked if it would be a viable option to scrap the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no".
    - Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written up front (by QA) and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand this approach.

    Knowing that I had to insist on my request and on my responsibility for this requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted back to the prior state due to the complexity of the problem and lack of time. This only happened after a 2-hour-long meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • Show USB drives in launcher, but not mounted internal partitions

    - by Gabriel
    Well, the title pretty much says it all. I have partitions that appear in the launcher when the system mounts them, just like when a USB key is plugged in. I do not want these mounted internal hard disk partitions to show as icons in the launcher, but I do want my external USB drive to show there when I plug it in. I've tried MyUnity - it only has an option to show/hide all mounted devices, which is not what I want. Can this be done? From /proc/mounts (in the order seen in the screenshot):

        /dev/sdb1 /media/CEDD-DE31 vfat rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0077,codepage=cp437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro 0 0
        /dev/sda3 /media/A423-E0E8 vfat rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0077,codepage=cp437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro 0 0
        /dev/sda5 /media/586C25656C253EDE fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0
        /dev/sda6 /home/greg/80gb ext4 rw,relatime,user_xattr,barrier=1,data=ordered 0 0

    Other items from /proc/mounts not appearing in the Unity launcher:

        /dev/sda1 /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=cp437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 0
        /dev/sda9 /mnt/backup ext4 rw,relatime,user_xattr,barrier=1,data=ordered 0 0
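    One hedged observation from the mount list above: the entries that show up in the launcher are the ones automounted under /media, while /dev/sda6, mounted through /etc/fstab at /home/greg/80gb, is not shown. So one possible approach is to give the internal partitions fixed /etc/fstab entries outside /media - a sketch with placeholder UUIDs and mount points:

        # /etc/fstab -- mount internal data partitions at fixed paths so they are
        # no longer treated as removable media (UUIDs and mount points are placeholders)
        UUID=XXXX-XXXX        /mnt/win-data   vfat  defaults,uid=1000,gid=1000  0  2
        UUID=XXXXXXXXXXXXXXXX /mnt/ntfs-data  ntfs  defaults                    0  0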

    Read the article

  • Clone Windows 7 to bootable USB disk

    - by Geziefer
    I saw some posts here with a similar topic, but I haven't quite found what I am looking for. All I need to do is transfer my Windows 7 (Professional, 64-bit), which is installed on an internal disk in my Dell Latitude laptop, to an external USB disk, in order to have my original system available for booting while installing a new system on the internal disk. This is essential, since I need a working project environment in case the new system takes some days until it is fully set up. I already tried a disk clone with Acronis and with Clonezilla, but in the first case the clone didn't even boot, and in the second case it booted but stopped with a bluescreen (and rebooted too fast to be able to see any error code). So, has anyone successfully done what I am trying to do, and can you help me out here?

    Read the article

  • Permission forbidden on localhost with apache2

    - by N Alex
    Here is what I am trying to do. I tried to add another folder to Apache, and I get the following error when trying to access testing/index.html. The idea is that I would like to have, for every customer, a folder like /home/neagoe/Work/InterWebs/Projects/[PROJECT NAME]/CustomerProjects/website/dist.

        Forbidden
        You don't have permission to access /index.html on this server.
        Apache/2.2.22 (Ubuntu) Server at testing Port 80

    Here are the steps that I followed:

    Step 1:

        sudo chmod a+x /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

    Step 2:

        sudo chown -R www-data:www-data /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
        sudo chmod -R 775 /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

    Step 3:

        sudo adduser $USER www-data

    Step 4:

        sudo a2enmod userdir

    Step 5:

        sudo cp /etc/apache/sites-available/default /etc/apache/sites-available/testing

    I edited the file /etc/apache/sites-available/testing so it looks like this:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName testing
            DocumentRoot /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist/ >
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    Step 6: I edited hosts (/etc/hosts) so it looks like this:

        127.0.0.1       localhost
        127.0.0.1       testing
        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    Step 7:

        sudo a2ensite testing
        sudo service apache2 restart

    I searched for about 2 hours on the internet but I can't figure out what went wrong. All the pages I found describe the same steps as above. I know there are similar questions on the internet, but the usual answer is to change permissions on the directory, which I did in Step 2. I am sorry if this is really a duplicate, but I couldn't find the right answer. Thank you! PS. I asked this also on AskUbuntu but didn't get any answers, so I'm trying my luck here.

    Edit: There isn't much in the error log or the access log. On the access.log:

        ::1 - - [10/Aug/2013:11:23:28 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:29 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:31 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:32 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:33 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:34 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:35 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        127.0.0.1 - - [10/Aug/2013:11:23:23 +0300] "POST /wordpress-testing/wp-cron.php?doing_wp_cron=1376123003.7026669979095458984375 HTTP/1.0" 200 705 "-" "WordPress/3.6; http://localhost/wordpress-testing"
        ::1 - - [10/Aug/2013:11:23:36 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:37 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        ::1 - - [10/Aug/2013:11:23:38 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)"
        127.0.0.1 - - [10/Aug/2013:11:31:32 +0300] "GET /index.html HTTP/1.1" 200 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0"

    And the last line repeats for about 200 rows. On the error.log:

    1. These lines repeat from time to time:

        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0
        [Sat Aug 10 13:06:42 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations
        [Sat Aug 10 13:07:36 2013] [notice] caught SIGTERM, shutting down
        PHP Warning:  PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0
        [Sat Aug 10 13:07:37 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations

    2. And this is the predominant error (hundreds of lines):

        [Sat Aug 10 13:07:40 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /index.html denied
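    A hedged pointer at the usual culprit for (13)Permission denied under home directories: Apache's www-data user needs the execute (search) bit on every directory along the path, not just on the final dist folder. One way to inspect the whole chain, assuming util-linux's namei is available:

        namei -m /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist
        # any component without an 'x' bit for "other" (e.g. drwx------ on /home/neagoe) will block Apache;
        # if that turns out to be the case, a common fix looks like:
        chmod o+x /home/neagoe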

    Read the article

  • Mod_rewrite not working on ISPConfig 3 Server

    - by Akahadaka
    Problem: I recently migrated a Drupal site from a shared hosting server to my own VM. Everything appears to work correctly, except clean URLs.

    My VM setup: Ubuntu 10.04, LAMP, ISPConfig 3.

    What I've tried - from reading a number of Drupal forums, I've tried the following, in this order:

    1. Checked that mod_rewrite is installed and enabled.
    2. Changed PHP from FastCGI to mod_php (I'd prefer to use FastCGI or suPHP, though, to avoid having tmp/files folders with 777 permissions).
    3. Changed the redirect type to L in ISPConfig under Sites -> domain.com -> Redirect.
    4. Changed /etc/apache2/sites-enabled/000-default:

        <Directory /var/www/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            ...
        </Directory>

    I'm not sure about points 3 and 4; I do want all domains to be able to use mod_rewrite out of the box. Question: have I done something wrong, or am I missing a step? Ultimately I would like to use FastCGI and have clean URLs working on all ISPConfig 3 domains without having to make any changes to individual domain settings. Any ideas appreciated; I'll try them all.
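    For item 1, a hedged sketch of how to double-check that the module and the per-directory overrides are really in effect (commands assume a stock Apache 2.2 on Ubuntu):

        apache2ctl -M | grep rewrite     # rewrite_module should appear in the list
        # then drop a minimal .htaccess into the affected site's document root:
        #   RewriteEngine On
        #   RewriteRule ^rewrite-test$ index.php [L]
        # requesting /rewrite-test should hit index.php; a plain 404 suggests the
        # AllowOverride setting for that vhost is not being honoured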

    Read the article

  • Automatic Appointment Conflict Resolution

    - by Thomas
    I'm trying to figure out an algorithm for resolving appointment times. I currently have a naive algorithm that pushes down conflicting appointments repeatedly until there are no more conflicts.

        # The appointment list is always sorted on start time
        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 11:00 -> 12:30>,
            <Appointment: 13:00 -> 14:00>,
            <Appointment: 13:30 -> 14:30>,
        ]

    Constraints are that appointments cannot be after 15:00 and cannot be before 9:00. This is the naive algorithm:

        for i, app in enumerate(appointment_list):
            for possible_conflict in appointment_list[i+1:]:
                if possible_conflict.start < app.end:
                    difference = app.end - possible_conflict.start
                    possible_conflict.end += difference
                    possible_conflict.start += difference
                else:
                    break

    This results in the following resolution, which obviously breaks those constraints, and the last appointment will have to be pushed to the following day.

        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 12:00 -> 13:30>,
            <Appointment: 13:30 -> 14:30>,
            <Appointment: 14:30 -> 15:30>,
        ]

    Obviously this is sub-optimal: it performs 3 appointment moves when the conflict could have been resolved with one - if we were able to push the first appointment backwards, we could avoid moving all the subsequent appointments down. I'm thinking that there should be a sort of edit-distance approach that would calculate the least number of appointments that need to be moved in order to resolve the scheduling conflict, but I can't get a handle on the methodology. Should the solution search be breadth-first or depth-first? When do I know if the solution is "good enough"?

    Read the article

  • MVC Validation with ModelState.isValid through a wizard

    - by Emmanuel TOPE
    I'm working on a small educational project in MVC 3, and I'm facing a small problem when attempting to handle validation in my application through a wizard. I tried to take advantage of MVC 3's ability to deliver the content of a different view for the same URL when handling an [HttpPost] method on a page. In my case, my main model class contains about ten [Required] properties that I would like to expose through a small wizard in 3 steps. So I want the user to be able to enter his personal information in the first step, then respond to some questions in the second step, and finally receive a confirmation mail from the web application with his credentials in the last step. I can't get to the last step because of the ModelState.IsValid check that I use to handle validation, which cannot pass if I define some properties as [Required] but don't put them on the first view. I've thought that I might use some nullable bool? properties in order to avoid validation issues, but I know that's not the proper way. Is there someone who would like to help me find a way to extend my validation across these three steps? Thanks in advance and sorry for my English, I'm not a native speaker.
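    A hedged sketch of one common way to do this (an illustration, not the author's code; names are placeholders and it needs using System.Linq;): in each step's [HttpPost] action, drop the ModelState entries for properties that were not posted in that step, so [Required] fields belonging to later steps don't fail validation yet.

        [HttpPost]
        public ActionResult Step1(RegistrationModel model)
        {
            // ignore validation state for fields this step did not post
            var postedKeys = Request.Form.AllKeys;
            foreach (var key in ModelState.Keys.Except(postedKeys).ToList())
                ModelState.Remove(key);

            if (!ModelState.IsValid)
                return View(model);            // redisplay this step with its errors

            // stash the partial model (session/TempData) and continue to the next step
            return RedirectToAction("Step2");
        }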

    Read the article
