Search Results

Search found 18786 results on 752 pages for 'edit insanely'.


  • CMS vs Admin Panel?

    - by Bob
    Okay, so this probably seems like an unusual, more grammar related question, but I was unsure of what to call it. If you use a software such as vBulletin or MyBB or even Blogger and you're the administrator (or other, lesser position such as moderator) of the forum, or publisher/author of the blog, you generally have access to something of an "admin panel". For example, vBulletin's admin panel looks like this and Blogger's admin panel looks something like this. While they both look different and do different things, the goal is fundamentally the same: to provide the user with a method for adding, modifying, or deleting content... to let them control and administrate their forum or blog. Also, they're both made specifically by the company for use in a specific product. Now, there's also options like Drupal. It seems to offer quite a bit more and be quite a bit more generalized. How does something like this work? If you were freelancing, would you deploy a website with Drupal, or would it be something the client might already have installed on their own server? I've never really used Drupal, only heard about it, so please let me know. Also, there seems to be other options like cPanel, a sort of global CMS that allows you to administrate over your entire website. How do those work in comparison to Drupal, or the administrative panels with vBulletin? They seem to serve related, but different purposes. Basically, what is the norm? If I'm developing a web application for a group that needs to be able to edit their website without the need to go into the code or the database (or rather, wants to act in a graphical, easy-to-use content-management/admin panel), would it also be necessary to write my own miniature admin panel? Or would I be able to send them off knowing that they have cPanel? Or could something like Drupal fill this void? Again, I'm a little new to web development, and I'm working on planning out my first, real, large website. So I need a little advice on the standards and expectations for web development - security and coding practices aside, what should I be looking for as far as usability and administration for the client (who will be running the site once I'm done creating the website)? Any extra tips would also be appreciated! Oh, and just a little bit: I'm writing the website using Ruby on the Sinatra framework (both Ruby and Sinatra are things I'm fairly comfortable with) and I'm not being paid to make the website (and I will also be a user, and one of the three administrators of the website) - it's being built for a club I'm in.

    Read the article

  • Coordinate based travel through multi-line path over elapsed time

    - by Chris
    I have implemented A* path finding to decide the course of a sprite through multiple waypoints. I have done this for point A to point B locations but am having trouble with multiple waypoints, because on slower devices when the FPS slows and the sprite travels PAST a waypoint, I am lost as to the math to switch directions at the proper place.

    EDIT: To clarify, my path finding code is separate in a game thread; this onUpdate method lives in a sprite-like class and runs in the UI thread for sprite updating. To be even more clear, the path is only updated when objects block the map; at any given point the current path could change, but that should not affect the design of the algorithm if I am not mistaken. I do believe all components involved are well designed and accurate, aside from this piece :-)

    Here is the scenario:

        public void onUpdate(float pSecondsElapsed) {
            // this could be 4x speed, so on slow devices the travel moved between
            // frames could be very large. What happens with my original algorithm
            // is it will start actually doing circles around the next waypoint..
            pSecondsElapsed *= SomeSpeedModificationValue;

            final int spriteCurrentX = this.getX();
            final int spriteCurrentY = this.getY();

            // getCoords contains a large array of the coordinates to each waypoint.
            // A waypoint is a destination on the map, defined by tile column/row. The
            // path finder converts these waypoints to X,Y coords.
            //
            // I.E:
            // Given a set of waypoints of 0,0 to 12,23 to 23,0 on a 23x23 tile map, each tile
            // being 32x32 pixels. This would translate in the path finder to this:
            // -> 0,0 to 12,23
            // Coord : x=16  y=16
            // Coord : x=16  y=48
            // Coord : x=16  y=80
            // ...
            // Coord : x=336 y=688
            // Coord : x=336 y=720
            // Coord : x=368 y=720
            //
            // -> 12,23 to 23,0  - NOTE: This direction change gives me trouble specifically
            // Coord : x=400 y=752
            // Coord : x=400 y=720
            // Coord : x=400 y=688
            // ...
            // Coord : x=688 y=16
            // Coord : x=688 y=0
            // Coord : x=720 y=0
            //
            // The current update index; the index specifies the coordinate that you see above
            // I.E. final int[] coords = getCoords( 2 );  -> x=16 y=80
            final int[] coords = getCoords( ... );

            // now I have the coords, how do I detect where to set the position? The tricky part
            // for me is when a direction changes, how do I calculate based on the elapsed time
            // how far to go up the new direction... I just can't wrap my head around this.
            this.setPosition(newX, newY);
        }
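    One way to think about the elapsed-time problem, sketched below as a rough, untested fragment (the speed, currentIndex and coords fields are placeholders standing in for whatever the real sprite class holds): treat speed * pSecondsElapsed as a travel budget, spend it segment by segment, and whenever the distance left to the next waypoint is smaller than the remaining budget, snap to that waypoint, subtract its distance and keep walking along the following segment. A frame that overshoots a corner then simply continues around it instead of circling.

        // Rough sketch only - not the original project's code. Assumes the sprite
        // keeps a waypoint list (coords), a current segment index (currentIndex)
        // and a speed in pixels per second.
        public void advanceAlongPath(float pSecondsElapsed) {
            float remaining = speed * pSecondsElapsed;   // travel budget for this frame
            float x = this.getX();
            float y = this.getY();

            while (remaining > 0f && currentIndex < coords.length) {
                int[] target = coords[currentIndex];
                float dx = target[0] - x;
                float dy = target[1] - y;
                float distToTarget = (float) Math.sqrt(dx * dx + dy * dy);

                if (distToTarget <= remaining) {
                    // Reached (or passed) this waypoint mid-frame: snap to it, spend
                    // the distance and carry the leftover into the next segment.
                    x = target[0];
                    y = target[1];
                    remaining -= distToTarget;
                    currentIndex++;
                } else {
                    // Move partway along the current segment and stop.
                    x += dx / distToTarget * remaining;
                    y += dy / distToTarget * remaining;
                    remaining = 0f;
                }
            }
            this.setPosition(x, y);
        }

    Because the loop keeps consuming the budget, several waypoints crossed in a single slow frame are handled naturally.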

    Read the article

  • Export all SSIS packages from msdb using Powershell

    - by jamiet
    Have you ever wanted to dump all the SSIS packages stored in msdb out to files? Of course you have, who wouldn’t? Right? Well, at least one person does because this was the subject of a thread (save all ssis packages to file) on the SSIS forum earlier today. Some of you may have already figured out a way of doing this but for those that haven’t, here is a nifty little script that will do it for you and it uses our favourite jack-of-all tools … Powershell!!

    Imagine I have the following package folder structure on my Integration Services server (i.e. in [msdb]): there are two packages in there called “20110111 Chaining Expression components” & “Package”, and I want to export those two packages into a folder structure that mirrors that in [msdb]. Here is the Powershell script that will do that:

        Param($SQLInstance = "localhost")

        #####Add all the SQL goodies (including Invoke-Sqlcmd)#####
        add-pssnapin sqlserverprovidersnapin100 -ErrorAction SilentlyContinue
        add-pssnapin sqlservercmdletsnapin100 -ErrorAction SilentlyContinue
        cls

        $Packages = Invoke-Sqlcmd -MaxCharLength 10000000 -ServerInstance $SQLInstance -Query "
        WITH cte AS
        (
            SELECT cast(foldername as varchar(max)) as folderpath, folderid
            FROM msdb..sysssispackagefolders
            WHERE parentfolderid = '00000000-0000-0000-0000-000000000000'
            UNION ALL
            SELECT cast(c.folderpath + '\' + f.foldername as varchar(max)), f.folderid
            FROM msdb..sysssispackagefolders f
            INNER JOIN cte c ON c.folderid = f.parentfolderid
        )
        SELECT c.folderpath, p.name, CAST(CAST(packagedata AS VARBINARY(MAX)) AS VARCHAR(MAX)) as pkg
        FROM cte c
        INNER JOIN msdb..sysssispackages p ON c.folderid = p.folderid
        WHERE c.folderpath NOT LIKE 'Data Collector%'"

        Foreach ($pkg in $Packages)
        {
            $pkgName = $Pkg.name
            $folderPath = $Pkg.folderpath
            $fullfolderPath = "c:\temp\$folderPath\"
            if(!(test-path -path $fullfolderPath))
            {
                mkdir $fullfolderPath | Out-Null
            }
            $pkg.pkg | Out-File -Force -encoding ascii -FilePath "$fullfolderPath\$pkgName.dtsx"
        }

    To run it, simply change the "localhost" parameter to the server you want to connect to, either by editing the script or passing it in when the script is executed. It will create the folder structure in C:\Temp (which you can also easily change if you so wish – just edit the script accordingly). Here’s the folder structure that it created for me – notice how it is a mirror of the folder structure in [msdb].

    Hope this is useful!

    @Jamiet

    Read the article

  • Failed 12.04 installation

    - by Rob Sayer
    I tried installing Ubuntu 12.04 today. Not an upgrade, a new installation. It didn't work. My computer specs:

        Computer: Compaq Presario CQ-104CA
        OS: Windows 7 Home 64 bit
        CPU: AMD V140
        BIOS: latest
        Graphics: AMD M880G with ATI Mobility Radeon HD 4250
        Wireless: Atheros AR9285
        Internal HD: SATA

    I wasn't connected to the internet at the time ... I know of a number of people who have installed Ubuntu unconnected and just updated later. It seemed to go normally until I got to the part where I chose to install dual boot Linux/Windows. Then, the screen went black and the following text appeared (I left out the [OK]'s):

        checking battery state
        starting crash report submission daemon
        starting cpu interrupts balancing daemon
        stopping system V runlevel compatibility
        starting configure network device security
        stopping configure network device security
        stopping cold plug devices
        stopping log initial device creation
        starting enable remaining boot-time encrypting devices
        starting configure network device security
        starting save udev log and update rules
        stopping save udev log and update rules
        stopping enable remaining boot-time encrypted block devices
        checking for running unattended-upgrades
        acpid: exiting
        speech-dispatcher disabled: edit /etc/default/speech-disorder

    At this point, the CD is ejected. Then nothing. If I press the return key, it boots Windows. I don't think that's what's supposed to happen. Thinking the CD media or DVD drive may have been faulty, I downloaded the .iso again and made a bootable USB stick, as per your instructions. This time there was no cryptic crash screen. It just booted Windows. I can't find any log files it may have left. Thinking the amd64 version may have been the wrong one, I tried downloading the x86 version. Same thing, both from CD and USB drive. Note I downloaded both files twice. I doubt it was a corrupted d/l.

    This is supposed to be a simple, transparent install. I went to the time and trouble of looking up my devices and drivers re Ubuntu beforehand, and was prepared to do some configuration, though I know someone who has the same wireless device and his worked right out of the box. But I spent over 3 hours trying to install it with only the above to show for it.

    Read the article

  • Recovering an Ubuntu installation - Ubuntu eats itself after 'sudo apt-get install -f'

    - by Tony Martin
    Updater (I assume) put a no entry style alert icon on the panel which informed me that certain package dependencies were not up to snuff. Upgrades were thereafter only partial. The dialogue advised that I sudo apt-get install -f. I did this hoping that app-get would fulfil dependencies and replace corrupted files and watched it systematically remove every component of linux, both the stuff I had installed and the core ubuntu packages. I could only assume at this stage that this was in preparation for a fresh install but, of course, I know better now - if you find yourself with apt-get warning you that you are about to remove several hundred packages and asking you to type an involved confirmation string seek advice before proceeding. I digress. This was a 64 bit install of 12.04. All that is left is grub pointing to a couple of windows recovery partitions on the hard drive. Thankfully the Ext4 partition is reachable from a stick boot. EDIT: I've logged onto the machine with a 64 bit stick and can see the file structure left behind by apt-get after {ahem} fixing. My first instinct was to run install from the stick but it seemed to want to do another install rather than a repair. My question then: is there a way to recover the current installation so that if I reinstall the packages I had they will pick up the original settings? I'm particularly worried about losing email from evolution - the rest I could probably lash back together. As for the use of PPA I'm not sure what you're driving at. I generally use Ubuntu Software Centre to install software, though I have used terminal scripts to add new repositories and software successfully following guidance on various websites. The most recent change I made was a downgrade of Wine in an attempt to install and run excel2007 (a necessity, I think, as I have VBA work to do). The installer had stalled and had to be killed. I wonder if that corrupted whatever database holds a model of the package installation structure. I would also be interested to know how this disaster came about. I see people in the know recommending the sudo apt-get install -f as a fairly innocuous cure in similar circumstances. Thanks for your attention, Tony Martin p.s. Do please forgive the rant aspects of the original post. It's hard to write rationally with a large hole in the pit of your stomach.

    Read the article

  • SCCM 2007 Collections per OU

    - by VirtualizeIT
    Recently I wanted to set up our SCCM collections to mirror our Active Directory structure. I finally figured out how to create collections per OU of the domain, and I decided to create a simple tutorial that may help other IT professionals complete this task.

    1. Open the ConfigMgr console and navigate to the collections: go to Site Database > Computer Management > Collections.
    2. In 'Collections', right-click and select New Collection. A wizard will pop up so you can enter the name of the collection and any notes you may want to associate with it.
    3. Next, select the database icon. In the 'Name' textbox enter the name of the query. I named mine 'Query' just for simplicity's sake. After you enter the name, select 'Edit Query Statement…'.
    4. Select the 'Criteria' tab.
    5. Select the icon that looks like a sun.
    6. At this point you should see a dialog box for the new criterion.
    7. Next, click the 'Select' button.
    8. Under 'Attribute class' scroll through until you see 'System Resource', and for the 'Attribute' scroll through until you see 'System OU Name'.
    9. After that, select OK.
    10. In the 'Value' textbox enter the string that is associated with the OU in your domain. NOTE: If you don't know the string name for your OU, you can simply go to "Active Directory Users and Computers", right-click on the OU and select Properties. In the 'Object' tab you should see the string under 'Canonical name of object'. That is the string that you put in the 'Value' textbox.
    11. After you enter the OU string name, press OK > OK > OK > NEXT > NEXT > FINISH.

    That's it! I hope this tutorial has helped you understand how to create a collection through your OU structure.

    Read the article

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of BizTalk Server 2009 on my development laptop I used the BizTalk Server Best Practice Analyser, which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully. Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs open SQL Server Management Studio, expand SQL Server Agent > Jobs and double click on the appropriate job. Select Steps and then edit the appropriate entries.

    Backup BizTalk Server (BizTalkMgmtDb)

    This job is comprised of three steps: BackupFull, MarkAndBackupLog and ClearBackupHistory.

    BackupFull:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */

    The frequency here is set/left as daily and the name is left as BTS. You must provide a full destination path for the backup files to be stored. There are also two optional parameters: a flag that controls whether the job forces a full backup if a partial backup fails, and a parameter to control the time of day to run the full backup (the default is midnight UTC time). For example:

        exec [dbo].[sp_BackupAllFull_Schedule] 'd' /* Frequency */, 'BTS' /* Name */, '<destination path>' /* location of backup files */, 0, 22

    MarkAndBackupLog:

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */

    You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time:

        exec [dbo].[sp_MarkAll] 'BTS' /* Log mark name */, '<destination path>' /* location of backup files */, 1

    Clear Backup History:

        exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7

    This will clear out the instances in the MarkLog table older than 7 days.

    DTA Purge and Archive (BizTalkDTADb)

    This job is comprised of a single step, Archive and Purge:

        exec dtasp_BackupAndPurgeTrackingDatabase
            0,    --@nLiveHours tinyint,
            1,    --@nLiveDays tinyint = 0,
            30,   --@nHardDeleteDays tinyint = 0,
            null, --@nvcFolder nvarchar(1024) = null,
            null, --@nvcValidatingServer sysname = null,
            0     --@fForceBackup int = 0

    Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than the HardDeleteDays will be deleted - this means that those long running orchestration instances that would otherwise never be purged will at some point have their data cleared down while allowing the instance to continue, thus preventing the DTA database from growing indefinitely. This should always be greater than the soft purge window. The @nvcFolder parameter is the path for the backup files; if this is null the job will not run, failing with the error:

        DTA Purge and Archive (BizTalkDTADb) Job failed
        SQL Server Management Studio, job activity monitor, view history
        The @nvcFolder parameter cannot be null.
        Archive and Purge step

    How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, '<destination path>', null, 0

    On a live server you may want to adjust these figures:

        exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, '<destination path>', null, 0

    Read the article

  • What is required for a scope in an injection framework?

    - by johncarl
    Working with libraries like Seam, Guice and Spring I have become accustomed to dealing with variables within a scope. These libraries give you a handful of scopes and allow you to define your own. This is a very handy pattern for dealing with variable lifecycles and dependency injection. I have been trying to identify where scoping is the proper solution, or where another solution is more appropriate (context variable, singleton, etc). I have found that if the scope lifecycle is not well defined it is very difficult and often failure prone to manage injections in this way. I have searched on this topic but have found little discussion of the pattern.

    Are there some good articles discussing where to use scoping and what the required/suggested prerequisites for scoping are? I'm interested in both reference discussion and your view on what is required or suggested for a proper scope implementation. Keep in mind that I am referring to scoping as a general idea; this includes things like globally scoped singletons, request or session scoped web variables, conversation scopes, and others.

    Edit: Some simple background on custom scopes: Google Guice custom scope

    Some definitions relevant to the above:

    "scoping" - A set of requirements that define what objects get injected at what time. A simple example of this is Thread scope, based on a ThreadLocal. This scope would inject a variable based on what thread instantiated the class (a sketch of such a scope is included at the end of this post).

    "context variable" - A repository passed from one object to another holding relevant variables. Much like scoping, this is a more brute force way of accessing variables based on the calling code. Example:

        methodOne(Context context){
            methodTwo(context);
        }

        methodTwo(Context context){
            ... //same context as method one, if called from method one
        }

    "globally scoped singleton" - Following the singleton pattern, there is one object per application instance. This applies to scopes because there is a basic lifecycle to this object: there is only one of these objects instantiated. Here's an example of a JSR-330 Singleton scoped object:

        @Singleton
        public class SingletonExample{
            ...
        }

    Usage:

        public class One {
            @Inject
            SingletonExample example1;
        }

        public class Two {
            @Inject
            SingletonExample example2;
        }

    After instantiation:

        one.example1 == two.example2 //true
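    A minimal sketch of the thread scope mentioned above (assuming Guice's standard Scope, Key and Provider types; the class and field names are made up, and a real implementation would also need a way to clear the per-thread map when a unit of work ends):

        import com.google.inject.Key;
        import com.google.inject.Provider;
        import com.google.inject.Scope;
        import java.util.HashMap;
        import java.util.Map;

        // Each thread gets, and keeps, its own instance of every key bound in this scope.
        public class ThreadScope implements Scope {
            private final ThreadLocal<Map<Key<?>, Object>> perThread =
                    ThreadLocal.withInitial(HashMap::new);

            @Override
            public <T> Provider<T> scope(final Key<T> key, final Provider<T> unscoped) {
                return () -> {
                    Map<Key<?>, Object> values = perThread.get();
                    @SuppressWarnings("unchecked")
                    T instance = (T) values.get(key);
                    if (instance == null) {
                        instance = unscoped.get();   // first request on this thread
                        values.put(key, instance);
                    }
                    return instance;
                };
            }
        }

    It would typically be registered from a module with something like bindScope(ThreadScoped.class, new ThreadScope()) against a custom @ThreadScoped annotation, which is exactly where the lifecycle question above bites: the scope is only safe if "one instance per thread" is a genuinely well-defined lifecycle for the objects involved.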

    Read the article

  • Set Covering : Runtime hang\error at function call in c

    - by EnthuCrazy
    I am implementing a set covering application which uses a cover function:

        int cover(set *skill_list, set *player_list, set *covering)

    Suppose skill_list = {a,b,c,d,e} and player_list = {s1,s2,s3}; then the output is covering = {s1,s3}, where say s1 = {a,b,c}, s3 = {d,e} and s2 = {b,d}. Now when I am calling this function it's hanging at run time (set_cover.exe stopped working). Here is my cover function:

        typedef struct Spst_{
            void *key;
            set *st;
        }Spst;

        int cover(set *skill_list, set *player_list, set *covering)
        {
            Liste *member, *max_member;
            Spst *subset;
            set *intersection;
            void **data;
            int max_size;

            set_init(covering);                        //to initialize set covering initially

            while(skill_list->size > 0 && player_list->size > 0)
            {
                max_size = 0;
                for(member = player_list->head; member != NULL; member = member->next)
                {
                    if(set_intersection(intersection, ((Spst *)(member->data))->st, skill_list) != 0)
                        return -1;
                    if(intersection->size > max_size)
                    {
                        max_member = member;
                        max_size = intersection->size;
                    }
                    set_destroy(intersection);         //at the end of iteration
                }
                if(max_size == 0)                      //to check for no covering
                    return -1;

                subset = (Spst *)max_member->data;     //to insert max subset from player list to covering set
                set_inselem(covering, subset);

                for(member = (((Spst *)max_member->data)->st->head); member != NULL; member = member->next) //to rem elem from skill list
                {
                    data = (void **)member->data;
                    set_remelem(skill_list, data);
                }
                set_remelem(player_list, (void **)subset); //to rem subset from set of subsets player list
            }
            if(skill_list->size > 0)
                return -1;
            return 0;
        }

    Now, assuming I have defined three sets of type set (as stated above) and am calling from main as cover(skills, subsets, covering); => runtime hang. Please give inputs on the missing link here, or on the prerequisites for a proper call to this function.

    EDIT: Assume the other functions used in cover are tested and working fine.

    Read the article

  • Webcast Replay : SANS Institute Product Review of Oracle Identity Manager

    - by B Shashikumar
    Thanks to everyone who attended the SANS Institute webinar covering the product review of Oracle Identity Manager. And a special thanks to our guest speakers from SuperValu - Phillip Black and Patrick Abreo. If you missed the webcast, you can catch a replay here. And here are the slides that were used in the webcast.

    There were many questions that we could not answer as we ran out of time. We have captured some of the questions with responses below.

    Q: Is Oracle Identity Analytics still offered as a separate product or is it part of Oracle Identity Manager?
    A: Oracle Identity Manager and Oracle Identity Analytics are now offered as part of Oracle Identity Governance Suite. OIA and OIM share a common UI architecture, common data model and common support for connected and disconnected resources.

    Q: When requesting new access/entitlements, is there an approval process?
    A: Yes. We leverage SOA BPEL-based workflows for approvals.

    Q: Are the identity self service capabilities based on Oracle ADF?
    A: Yes, they are completely based on Oracle ADF.

    Q: Can you give some examples of personalization and customization with Oracle Identity Manager 11gR2?
    A: With the new UI config framework we can enable different levels of UI customization. Customers now have the ability to point and click to customize, or drag and drop customization without any need for coding, so users can easily personalize the interface of their application within the browser. For example, they can change the logo; rearrange or hide Home Page regions; regularly searched items can be saved and re-used; searchable and search results columns can be configured; sorting preferences are remembered; and so on. For more sophisticated customization, customers can also edit the standard JSF within the page to alter business rules, modify page flows, page layouts and other items.

    Q: Can you explain the role of sandboxes in customization?
    A: Customers can make their custom changes within a sandbox so that it doesn't impact their production environment. They can make their changes, validate those changes, stage and then commit those changes without affecting production users. This is similar to how source code control systems like Perforce work.

    To watch a replay of the webcast, click here.

    Read the article

  • Are there any drawbacks to the Major.Minor.YMDD.Build version strategy?

    - by Chu
    I'm trying to come up with a good version strategy to fit our specific needs. We've proposed settling on this and I wanted to ask the question to see if anyone's experience would suggest avoiding this or altering it in any way. Here's our proposal.

    Versions are released in this format: MAJOR.MINOR.YMDD.BN. Here it is broken out:

    - MAJOR & MINOR are typical; we'll increase MINOR when we feel code and new feature sets warrant it, once every few months most likely. MAJOR will increase ~yearly.
    - YMDD: Y will be the last digit of the current year, so "1" for 2011, "2" for 2012, etc. A non-padded month will be used to keep the number smaller (9 instead of 09, for example). DD of course is the day, padded with a zero for days under 10.
    - BN: BN is the build number and increases by one any time we make a change to a branch of the code represented by the build.

    For example: if we were to make a build today, our release would be version 5.0.1707.1. I release to QA today and 3 days from now QA finds that a change broke the save functionality on a page. Instead of changing our current development code, I'd go back to the code that I used to create version 5.0.1707.1, make the fix there, then increase the BN portion of the version and re-release 5.0.1707.2 back to QA.

    In short, any time a change is made to a branched version that isn't the active dev branch, we'd use the original version number and increase only the BN portion (even if the change happened 3 days, 3 weeks or 3 months from the initial release of that version). Any time we make a new release from our active dev branch, we'd come up with a new version based on the M/D of the release using the outlined strategy. We do this once every 2-3 weeks.

    Are there holes or pitfalls with this? If so, what are they? Thanks

    EDIT: To clarify one point that I didn't get out very well - Oct/Nov/Dec will be two digits; it's only the year that won't be. So 9 for Sept, 10 for Oct, 11 for Nov, etc.
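    Not an answer to the pitfalls question, but since the scheme is purely mechanical, here is a tiny illustration of generating it (Java, made-up class name; the build number would really come from the build server):

        import java.time.LocalDate;

        // Illustrative only: builds a MAJOR.MINOR.YMDD.BN string as described above,
        // e.g. 5.0.1707.1 for July 7th, 2011 with build number 1.
        public class VersionNumber {
            public static String format(int major, int minor, LocalDate date, int build) {
                int y = date.getYear() % 10;   // last digit of the year only
                return String.format("%d.%d.%d%d%02d.%d",
                        major, minor, y, date.getMonthValue(), date.getDayOfMonth(), build);
            }

            public static void main(String[] args) {
                System.out.println(format(5, 0, LocalDate.of(2011, 7, 7), 1)); // 5.0.1707.1
            }
        }

    The non-padded month and zero-padded day fall out of the format string, and October through December naturally take two digits, matching the edit above.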

    Read the article

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server side) on a new forum (a brand new product from a technical point of view) we launched in Germany a few months ago, and I'm very surprised by the results I get. I monitor our response time using Apache logs and our own implementation of a Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today where it was 700 three months ago with our old product.

    I figured that GWT was taking client side metrics into account, so I added some measures to our Boomerang beacon and everything looks just fine. I've also run some random pages through YSlow and Google's Page Speed and everything looks better than it was before. We even have an 82% on Google's Page Speed tool, which is quite cool for a site with some ads in it :)

    Lately, we have signed a deal with Akamai to use two of their products: CDN for our static files (we were using another CDN before but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems that these changes have improved things from 650 ms to about 500 ms, which is good (still not great but it is definitely an improvement). But Webmaster Tools continues to report an increasing average response time where we see it decreasing over the same period.

    Have you ever had the same kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check that it is what Google wants?

    Edit 2011/07/26: Thanks for your answers guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one for now. We probably found an issue on our side with some very slow pages (around 3000 ms!!) and we are trying to fix them. I'll keep you posted as soon as I have some info. Thanks again!

    Read the article

  • Euler angles to Cartesian Coordinates for use with gluLookAt

    - by notrodash
    I have searched all over the internet but just couldn't find the answer. I am using LibGDX and this is part of my code that loops over and over:

        public void render() {
            GL11 gl = Gdx.gl11;

            float centerX = (float)Math.cos(yaw) * (float)Math.cos(pitch);
            float centerY = (float)Math.sin(yaw) * (float)Math.cos(pitch);
            float centerZ = (float)Math.sin(pitch);
            System.out.println(centerX+" "+centerY+" "+centerZ+" ~ "+GDXRacing.camera.position.x+" "+GDXRacing.camera.position.y+" "+GDXRacing.camera.position.z);

            Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0);

            if(Gdx.input.isKeyPressed(Keys.A)) { yaw--; }
            if(Gdx.input.isKeyPressed(Keys.D)) { yaw++; }
        }

    I might just be bad at the math, but I don't get it. Does someone have a good explanation and an idea about how to deal with this? I am trying to make a first person camera. By the way, the camera is translated by +10 on the Z axis. Currently when I run the application, this is what I get: Watch video in browser | Download video (for those who can't download the video, everything shakes in a clockwise/anticlockwise action, depending on whether I increase or decrease the yaw value). Thank you.

    [edit] And with this code:

        public void render() {
            GL11 gl = Gdx.gl11;

            float centerX = (float)(MathUtils.cosDeg(yaw)*4);
            float centerY = 0;
            float centerZ = (float)(MathUtils.sinDeg(yaw)*4);
            System.out.println(centerX+" "+centerY+" "+centerZ+" ~ "+GDXRacing.camera.position.x+" "+GDXRacing.camera.position.y+" "+GDXRacing.camera.position.z);

            Gdx.glu.gluLookAt(gl, GDXRacing.camera.position.x, GDXRacing.camera.position.y, GDXRacing.camera.position.z, centerX, centerY, centerZ, 0, 1, 0);

            if(Gdx.input.isKeyPressed(Keys.A)) { yaw--; }
            if(Gdx.input.isKeyPressed(Keys.D)) { yaw++; }
        }

    it slowly swings from the left to the right. This approach worked for turning left and right in 2D games, though. What am I doing wrong?
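    Not a verified fix for this particular project, but the usual cause of this symptom is that gluLookAt's centre parameters are an absolute point in world space, not a direction: passing a unit-length direction vector makes the camera stare at a point near the origin while it sits at z = +10, which produces exactly this swinging/shaking. A rough sketch of the common correction (same LibGDX calls as above; yaw/pitch assumed to be in degrees, y-up with -z forward is an assumed convention and may need adjusting):

        // Sketch only: build a direction from yaw/pitch, then ADD it to the camera
        // position so gluLookAt receives an absolute look-at point.
        float yawRad   = (float) Math.toRadians(yaw);
        float pitchRad = (float) Math.toRadians(pitch);

        float dirX = (float) (Math.cos(pitchRad) * Math.sin(yawRad));
        float dirY = (float)  Math.sin(pitchRad);
        float dirZ = (float) (-Math.cos(pitchRad) * Math.cos(yawRad));

        float eyeX = GDXRacing.camera.position.x;
        float eyeY = GDXRacing.camera.position.y;
        float eyeZ = GDXRacing.camera.position.z;

        Gdx.glu.gluLookAt(gl, eyeX, eyeY, eyeZ,
                eyeX + dirX, eyeY + dirY, eyeZ + dirZ,
                0f, 1f, 0f);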

    Read the article

  • Using multiple diagrams per model in Entity Framework 5.0

    - by nikolaosk
    I have downloaded .NET Framework 4.5 and Visual Studio 2012 since it was released to MSDN subscribers on the 15th of August. For people that do not know about that yet, please have a look at Jason Zander's excellent blog post. Since then I have been investigating the many new features that have been introduced in this release. In this post I will be looking into the support for multiple diagrams per model in the enhanced Entity Framework designer.

    In order to follow along with this post you must have Visual Studio 2012 and .NET Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. I have also installed on my machine SQL Server 2012 Developer edition, and I have downloaded and installed the AdventureWorksLT2012 database. You can download this database from the CodePlex website.

    Before I start showcasing the demo I want to say that I strongly believe that Entity Framework is maturing really fast and now at version 5.0 can be used as your data access layer in all your .NET projects. I have posted extensively about Entity Framework in my blog. Please find all the EF related posts here.

    In this demo I will show you how to split an entity model into multiple diagrams using the new enhanced EF designer. We will not build an application in this demo. Sometimes our model can become too large to edit or view. In earlier versions we could only have one diagram per EDMX file. In EF 5.0 we can split the model into more diagrams.

    1) Launch VS 2012. The Express edition will work fine.
    2) Create a new project. From the available templates choose a Web Forms application.
    3) Add a new item to your project, an ADO.NET Entity Data Model. I have named it AdventureWorksLT.edmx. Then we will create the model from the database and click Next. Create a new connection by specifying the SQL Server instance and the database name and click OK. Then click Next in the wizard. In the next screen of the wizard select all the tables from the database and hit Finish.
    4) It will take a while for our .edmx diagram to be created. When I select an entity (e.g. Customer) from my diagram and right-click on it, a new option appears: "Move to new Diagram". Make sure you have the Model Browser window open.
    5) When we do that, a new diagram is created and our entity is moved there.
    6) We can also right-click and include the related entities.
    7) When we do that, the related entities are copied to the new diagram.
    8) Now we can cut (CTRL+X) the entities from Diagram2 and paste them back into Diagram1.
    9) Finally, another great enhancement of the EF 5.0 designer is that you can change colors of the various entities that make up the model. Select the entities you want to change color, then in the Properties window choose the color of your choice.

    To recap, we have demonstrated how to split your entity model into multiple diagrams, which comes in handy in EF models that have a large number of entities in them. Hope it helps!!!!

    Read the article

  • Default values - are they good or evil?

    - by Andrew
    The question is about default values in general - default return values from functions, default parameter values, default logic for when something is missing, default logic for handling exceptions, default logic for handling edge conditions, etc.

    For a long time I considered default values to be a "pure evil" thing, something that "cloaks the catastrophe" and results in very hard to find bugs. But recently I started to think about default values as some sort of technical debt ... which is not a straight bad thing but something that could provide some "short term financing" and get us through the project (how many of us could afford to buy a house without taking out a mortgage?).

    When I say "short term", I don't mean "do something quickly first and refactor it out later before it hits production". No - I am talking about relying on hardcoded default values in production software. Granted, it could cause some issues, but what if it is only going to cause a single problem in a whole year? Again, I am talking about "average" mainstream software here (not software for a nuclear power station) - the average web site or a UI application for accounting software, meaning that people's lives are not at stake, nor millions of dollars. Again, from my experience, business users would rather live with software which "works somehow" than wait for a perfect one. And the use of default values helps a lot if you develop software in a RAD style.

    But again, the longest debug sessions I have spent were because of bugs introduced by a default value which either stopped being "a default" along the way, or because a small subsystem was recently upgraded and as a result of this upgrade it does not handle the default correctly (e.g. empty list vs null, or null string vs empty string).

    So my question is: are default values good or evil? And if they are a technical debt, how do you measure how much you can borrow so you can afford the repayments? Would really appreciate any input. Cheers.

    EDIT: If I am using default values as a way to cut corners during development, and the corner cutting results in bugs and issues, what is the methodology to recover from these issues?
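    To make the "cloaked catastrophe" concrete, here is a deliberately contrived illustration (made-up names, not from any real project) of the empty-list-vs-null kind of bug mentioned above: the default keeps every caller running, and also makes a failed lookup indistinguishable from a genuinely empty result.

        import java.util.Collections;
        import java.util.List;

        // Contrived example: a hardcoded default that silently hides a failure.
        public class OrderService {

            // The "friendly" default: callers never see null and never crash...
            public List<String> ordersFor(String customerId) {
                List<String> orders = lookupOrders(customerId);
                return (orders != null) ? orders : Collections.emptyList();
            }

            // ...but if an upgraded subsystem starts returning null on a timeout
            // instead of throwing, every customer quietly appears to have no orders.
            private List<String> lookupOrders(String customerId) {
                return null; // imagine a timeout swallowed somewhere downstream
            }
        }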

    Read the article

  • Blender to 3ds max to cal3d format

    - by Kaliber64
    There are quite a few questions on cal3d but they are old and don't apply anymore. In Blender(must be 2.49a for python script to work!!!): I have a scene with 7 meshes, 1 armature, 10 bones. I tried going to one mesh to simplify it but doesn't change anything. I found a small blend file that was used for cal3d and it exported just fine. So I tried to copy it's setup with no success. EDIT*8/13/2012 In the last week here is what I have found so far. I made the mesh in the newest blender(2.62?) and exported it to import it in the old one(2.49a). Did an animation in the old one because importing new blend files to old blenders, its just said it would lose keyframe data and all was good. And then you get the last problem of it not exporting meshes. BUT I found that meshes made in the old one export regardless. I can't find any that won't export. So if I used the old blender to remake my model I could get it to export :) At this point I found a modified release of cal3d (because the most core model variable would not initiate as I made a really small test subject in old blender instead of remaking my big one which took 4 hours.) which fixes the morph objects and adds what cal3d left off with. Under their license they have to release the modification but it has no documentation so I have to figure it out on my own. Its mostly the same. But with this lib it came with a 3ds max exporter. My question now is how do I transfer armature and mesh information from blender to 3ds max in order to export into cal3d format. Every time I try the models are see through and small and there are no bones. The formats I have tried to import are .3ds .obj(mesh only) and COLLADA. In all of them the mesh is invisible and no bones. It says the default texture is on so I should be able to see it. All the vertices are present I found a vertex highlighter so I can see those. If any of this is confusing let me know so I can clear it up. Its late .<=sleep.

    Read the article

  • Building an MVC application using QuickBooks

    - by dataintegration
    RSSBus ADO.NET Providers can be used from many tools and IDEs. In this article we show how to connect to QuickBooks from an MVC3 project using the RSSBus ADO.NET Provider for QuickBooks. Although this example uses the QuickBooks Data Provider, the same process can be used with any of our ADO.NET Providers.

    Creating the Model

    Step 1: Download and install the QuickBooks Data Provider from RSSBus.
    Step 2: Create a new MVC3 project in Visual Studio. Add a data model to the Models folder using the ADO.NET Entity Data Model wizard.
    Step 3: Create a new RSSBus QuickBooks Data Source by clicking "New Connection", specify the connection string options, and click Next.
    Step 4: Select all the tables and views you need, and click Finish to create the data model.
    Step 5: Right click on the entity diagram and select 'Add Code Generation Item'. Choose the 'ADO.NET DbContext Generator'.

    Creating the Controller and the Views

    Step 6: Add a new controller to the Controllers folder. Give it a meaningful name, such as ReceivePaymentsController. Also, make sure the template selected is 'Controller with empty read/write actions'. Before adding new methods to the controller, create views for your model. We will add the List, Create, and Delete views.
    Step 7: Right click on the Views folder and go to Add -> View. Here create a new view for each of the List, Create, and Delete templates. Make sure to also associate your model with the new views.
    Step 10: Now that the views are ready, go back and edit the ReceivePayment controller. Update your code to handle the Index, Create, and Delete methods.

    Sample Project

    We are including a sample project that shows how to use the QuickBooks Data Provider in an MVC3 application. You may download the C# project here or download the VB.NET project here. You will also need to install the QuickBooks ADO.NET Data Provider to run the demo. You can download a free trial here. To use this demo, you will also need to modify the connection string in the 'web.config'.

    Read the article

  • Installed LibreOffice 4 with ppa, how do I remove it and go back to LibreOffice 3?

    - by MMA
    EDIT This question is not at all a duplicate of How to downgrade from LibreOffice 4.0 to 3.6? The above mentioned question talks about downgrading from a specific version of LibreOffice, namely from 4.0 to 3.6. The solutions mentioned are not the ones I am looking for. They will work but I wanted a general solution without using PPA or downloading .deb files for from a higher version to a lower version. The above solutions suggest either downloading .deb files for LibreOffice 3.6 or adding repository for it. Furthermore, some of the answers put out-of-proportion~(applicable for the solution, however) stress on use of synaptic, not general command-line-solution. That made me wonder, at this very moment, if I take a fresh computer, and install Ubuntu 12.04, LibreOffice installation will work without a hitch. Then why I can not install LibreOffice in my 12.04 machine today from simple command line? This answer to my question, clarified everything. I need to use ppa-purge so that this resets all packages from a PPA to the standard versions released for my distribution. Basically it is like a way to restore my system back to the way it was before my installed packages from a PPA. This article further elaborates the idea. The above mentioned answer worked perfectly for me. Actually, this was an education for me since it taught me how do downgrade a package that was added via PPA. I had upgraded from LibreOffice 3 to LibreOffice 4 using the PPA. Now since I found that LibreOffice 4 has some issues, including handling my native language, I want to move back to LibreOffice 3. In order to accomplish this, I removed the LibreOffice config directory from my home and then purged LibreOffice from my machine. sudo apt-get purge libreoffice-* Then I removed the relevant PPA's using the sudo apt-add-repository --remove command. And then ran sudo apt-get update. Now, when I try to install LibreOffice using the command sudo apt-get install libreoffice I get an avalanche of output about unmet dependencies, something like, The following packages have unmet dependencies: libreoffice : Depends: libreoffice-core (= 1:3.5.7-0ubuntu4) but it is not going to be installed (snipped) If I dig the issue further, by using the command, sudo apt-get install libreoffice-core I get The following packages have unmet dependencies: libreoffice-core : Depends: libreoffice-common (> 1:3.5.7) but it is not going to be installed Depends: libexttextcat0 (>= 2.2-8) but it is not going to be installed Depends: ure (>= 3.5.7~) but it is not going to be installed E: Unable to correct problems, you have held broken packages. Could you please tell me how do I install LibreOffice 3 in my machine? I am using Ubuntu 12.04 LTS.

    Read the article

  • Live CD has black screen HP DV6

    - by Shaun Killingbeck
    Attempting to install/try Ubuntu (11.10, 12.04) on my new laptop, using a live CD (and I also tried USB). I get the purple screen (with the man/keyboard at the bottom) and after that the screen flashes bright white before going black. Ubuntu continues to load in the background, with the login sound etc., but the screen is off. I have tried as many different solutions as I could find, including:

    - using nomodeset, xforcevesa, i915.modeset=0, and also now i915.modeset=1 in boot options (separately): varying consequences, but either I end up at a blinking cursor with no prompt, a command line (startx fails: no screen found), or the original blank screen again
    - booting from VirtualBox - it crashes at the same place the screen would go blank when using a CD/USB
    - 11.04: I don't have this problem BUT when trying to install, I get a ubi-partman error 141 (possibly down to the three partitions that came on my laptop... not sure why HP needed their own separate partition for HP Tools...)

    Model: HP Pavilion DV6 6B08SA
    Processor: AMD Quad-Core A6-3410MX APU with Radeon HD 6545G2 Dual Graphics (1.6 GHz, 4 MB L2 cache)
    Chipset: AMD RS880M

    Any help would be greatly appreciated. I just want to be able to partition the drive and install Ubuntu. I'm assuming the issue is graphics card related, although I have no confirmation of that.

    Update: Tried the workarounds on https://wiki.ubuntu.com/X/Troubleshooting/BlankScreen - set gfxpayload=text changed nothing, removing splash did nothing, and setting vesafb.nonsense=1 did nothing either. I'd like to be able to collect some log information somehow, but I can't get to a command line from the live CD.

    Tried the latest 12.04 beta: same issue. Tried nomodeset without splash or quiet, and get the following (tail of) output before it freezes on that screen:

        * Starting configure network device security [OK]
        * Starting configure network device [OK]
        [ 25.720899] ieee80211 phy0: w1_ops_config: change monitor mode: false (implement)
        [ 25.720923] ieee80211 phy0: w1_ops_config: change power-save mode: false (implement)
        * Starting restore sound card(s') mixer state(s) [fail]
        [ 25.721849] ieee80211 phy0: w1_ops_bss_info_changed: qos enabled: false (implement)
        * Stopping save kernel messages [OK]
        * Starting bluetooth [OK]
        * PulseAudio configured for per-user sessions
        saned disabled; edit /etc/default/saned
        [ 25.988016] hci_cmd_timer: hci0 command tx timeout
        [ 26.207225] bad LUN (0:1)
        [ 26.223735] bad target number (1:0)
        [ 26.252111] bad target number (2:0)
        [ 26.272170] bad target number (3:0)
        [ 26.300154] bad target number (4:0)
        [ 26.328162] bad target number (5:0)
        [ 26.344180] bad target number (6:0)
        [ 26.368142] bad target number (7:0)
        * Checking battery state... [OK]
        * Stopping System V runlevel capability [OK]

    Does this give any indication of the problem? The false (implement) messages also reappear when I press the power button to ask it to shut down, followed by a [fail] status for killing remaining processes.

    Read the article

  • Customer won't decide, how to deal?

    - by Crazy Eddie
    I write software that involves the use of measured quantities, many input by the user, most displayed, that are fed into calculation models to simulate various physical thing-a-majigs. We have created a data type that allows us to associate a numeric value with a unit, we call these "quantities" (big duh). Quantities and units are unique to dimension. You can't attach kilogram to a length for example. Math on quantities does automatic unit conversion to SI and the type is dimension safe (you can't assign a weight to a pressure for example). Custom UI components have been developed that display the value and its unit and/or allow the user to edit them. Dimensionless quantities, having no units, are a single, custom case implemented within the system. There's a set of related quantities such that our target audience apparently uses them interchangeably. The quantities are used in special units that embed the conversion factors for the related quantity dimensions...in other words, when using these units converting from one to another simply involves multiplying the value by 1 to the dimensional difference. However, conversion to/from the calculation system (SI) still involves these factors. One of these related quantities is a dimensionless one that represents a ratio. I simply can't get the "customer" to recognize the necessity of distinguishing these values and their use. They've picked one and want to use it everywhere, customizing the way we deal with it in special places. In this case they've picked one of the dimensions that has a unit...BUT, they don't want there to be a unit (GRR!!!). This of course is causing us to implement these special overrides for our UI elements and such. That of course is often times forgotten and worse...after a couple months everyone forgets why it was necessary and why we're using this dimensional value, calling it the wrong thing, and disabling the unit. I could just ignore the "customer" and implement the type as the dimensionless quantity, which makes most sense. However, that leaves the team responsible for figuring it out when they've given us a formula using one of the other quantities. We have to not only figure out that it's happening, we have to decide what to do. This isn't a trivial deal. The other option is just to say to hell with it, do it the customer's way, and let it waste continued time and effort because it's just downright confusing as hell. However, I can't count the amount of times someone has said, "Why is this being done this way, it makes no sense at all," and the team goes off the deep end trying to figure it out. What would you do? Currently I'm still attempting to convince them that even if they use terms interchangeably, we at the least can't do that within the product discussion. Don't have high hopes though.
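    For readers who have not seen such a type before, here is a very rough sketch of the dimension-safe quantity idea described above (plain Java, nothing like the real implementation; the dimensions act as phantom type parameters so mixing them simply does not compile):

        // Rough sketch of the idea only. Dimensions are phantom types, so a
        // Quantity<Mass> cannot be added to or assigned from a Quantity<Length>,
        // and arithmetic normalises everything to SI internally.
        interface Dimension {}
        final class Length implements Dimension {}
        final class Mass implements Dimension {}

        final class Quantity<D extends Dimension> {
            private final double siValue;                 // value stored in SI units

            private Quantity(double siValue) { this.siValue = siValue; }

            static Quantity<Length> metres(double v)    { return new Quantity<>(v); }
            static Quantity<Mass>   kilograms(double v) { return new Quantity<>(v); }

            Quantity<D> plus(Quantity<D> other) {         // only same-dimension math compiles
                return new Quantity<>(this.siValue + other.siValue);
            }

            double inSi() { return siValue; }
        }

    With this shape, Quantity.metres(2).plus(Quantity.kilograms(3)) is a compile-time error, which is the dimension-safety property being described; the related-but-distinct ratio quantity in question is exactly the kind of case where collapsing two such types into one erases that protection.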

    Read the article

  • Avoiding Duplicate Content Penalties on a Corporate/Franchise website

    - by heath
    My question is really an extension of a previous question that was ported from stackoverflow and closed so I cannot edit it. The basic gist is a regional franchise company has decided to force all independent stores into one website look; they currently all have their own domains and completely different websites. After reading the helpful answers and looking over some links provided, I think my solution is to put a 301 on each franchise store site (acme-store1.com, acme-store2.com, etc) back to the main corporate site (acme.com). All of the company history, product info, etc (about 90% of the entire site) applies to all stores. However, each store should have some exclusive content such as staff, location pictures, exclusive events and promotions, etc. I originally thought that I would simply do something like acme.com/store1/staff, acme.com/store2/staff, etc for the store exclusive content and then acme.com/our-company, for example, would cover all stores. However, I now see two issues that I don't know how to solve. They want to see site stats based on what store site they came from. If a user comes from acme-store1.com, is redirected to acme.com and hits several pages, don't I need to somehow keep that original site in the new url to track each page in that user's session and show they originally came from acme-store1.com? Each store is still independently owned and is essentially still in competition with the other stores, albeit, in less competition than they are with other brands. This is important because each store would like THEIR contact info, links to their social media pages, their mailing list sign-up and customer requests on EVERY page. So if a user originally goes to acme-store1.com and is redirected to acme.com, it still should look to the user that it's all about store 1, even though 90% of the content will be exactly the same as it is in the store 2, store 3 and corporate site. For example, acme.com/our-company would have the same company history, same header/footer/navigation, BUT depending on the original site the user came from, it would display contact and links to THAT store. If someone came directly to the corporate site, it would display their contact and links (they have their own as well). I was considering that all redirects would be to store1.acme.com, store2.acme.com, etc (or acme.com/store1) and then I can dynamically add the contact info and appropriate links based on the subdomain or subfolder. But, then I have to worry about duplicate content penalties because, again, about 90% of the text in these "subdomains" are all the same. For reference, this is a PHP5 site. I've already written a compact framework utilizing templates and mod-rewrite that I've used for other sites. Is this an easy fix that I'm just not grasping? Any suggestions?

    Read the article

  • Unauthorized response from Server with API upload

    - by Ethan Shafer
    I'm writing a library in C# to help me develop a Windows application. The library uses the Ubuntu One API. I am able to authenticate and can even make requests to get the Quota (access to the Account Admin API) and Volumes (so I know I have access to the Files API at least). Here's what I have as my upload code:

        public static void UploadFile(string filename, string filepath)
        {
            FileStream file = File.OpenRead(filepath);
            byte[] bytes = new byte[file.Length];
            file.Read(bytes, 0, (int)file.Length);

            RestClient client = UbuntuOneClients.FilesClient();
            RestRequest request = UbuntuOneRequests.BaseRequest(Method.PUT);
            request.Resource = "/content/~/Ubuntu One/" + filename;
            request.AddHeader("Content-Length", bytes.Length.ToString());
            request.AddParameter("body", bytes, ParameterType.RequestBody);
            client.ExecuteAsync(request, webResponse => UploadComplete(webResponse));
        }

    Every time I send the request I get an "Unauthorized" response from the server. For now the "/content/~/Ubuntu One/" is hardcoded, but I checked and it is the location of my root volume. Is there anything that I'm missing?

    UbuntuOneClients.FilesClient() starts the url with "https://files.one.ubuntu.com". UbuntuOneRequests.BaseRequest(Method.{}) is the same request that I use to send my Quota and Volumes requests; basically it just provides all of the parameters needed to authenticate.

    EDIT: Here's the BaseRequest() method:

        public static RestRequest BaseRequest(Method method)
        {
            RestRequest request = new RestRequest(method);
            request.OnBeforeDeserialization = resp => { resp.ContentType = "application/json"; };

            request.AddParameter("realm", "");
            request.AddParameter("oauth_version", "1.0");
            request.AddParameter("oauth_nonce", Guid.NewGuid().ToString());
            request.AddParameter("oauth_timestamp", DateTime.Now.ToString());
            request.AddParameter("oauth_consumer_key", UbuntuOneRefreshInfo.UbuntuOneInfo.ConsumerKey);
            request.AddParameter("oauth_token", UbuntuOneRefreshInfo.UbuntuOneInfo.Token);
            request.AddParameter("oauth_signature_method", "PLAINTEXT");
            request.AddParameter("oauth_signature", UbuntuOneRefreshInfo.UbuntuOneInfo.Signature);
            //request.AddParameter("method", method.ToString());
            return request;
        }

    and the FilesClient() method:

        public static RestClient FilesClient()
        {
            return (new RestClient("https://files.one.ubuntu.com"));
        }

    Read the article

  • Thinkpad brightness steps error using FN+Home/End

    - by petermolnar
    I've met the following problem: normally my T400 (Lenovo ThinkPad) has 16 steps of brightness, and Windows utilizes them correctly. After a fresh install and minor tweaks of Mint 12 (which is based on Ubuntu 11.10) I only had 6 steps, which was way too few. Listing /sys/class/backlight showed 3 entries. I removed the acpi-tools package, one of them disappeared - and I now have 10 steps! Therefore I think if I can reduce the entries to 1 I'm going to have 16 steps, since the stepping will be 1 instead of 2 (or 3).

        /sys/class/backlight/
        intel_backlight -> ../../devices/pci0000:00/0000:00:02.0/drm/card0/card0-LVDS-1/intel_backlight
        thinkpad_screen -> ../../devices/virtual/backlight/thinkpad_screen

    The problem is that I'm unable to trace back which configs / daemons / kernel options trigger these two. More strangely, I discovered some odd behaviour. I monitored

        watch -n1 "cat /sys/class/backlight/thinkpad_screen/actual_brightness"

    and

        watch -n1 "cat /sys/class/backlight/intel_backlight/actual_brightness"

    while changing the brightness with FN+Home/End combinations from max to min. The outcome is the following:

        brightness   intel     thinkpad
        ----------   -------   --------
        MAX          2408475   7
        |            1955115   5
        |            1435640   3
        |            1246740   1
        |            1086175   0
        |            1010615   6
        |             859495   4
        |             689485   2
        v             481695   0
        MIN           217235   0

        brightness   intel     thinkpad
        ----------   -------   --------
        MIN           217235   0
        |             481695   2
        |             689485   4
        |             859495   6
        |            1010615   7
        |            1086175   1
        |            1246740   3
        |            1435640   5
        v            1955115   7
        MAX          2408475   0

    When stepping from MIN to MAX, there's no difference between the last 2 steps. Also, the OSD icon (Cinnamon desktop, default theme) goes from full to min in 4 steps and from full to min once again in 4 steps. So... it seems that the intel entry is working correctly, showing correct values. The thinkpad entry however twists things and even shows incorrect values.

    Does anyone have any idea how to get rid of the thinkpad entry?

    System data:
    Linux Mint 12, 3.0.0-16 kernel
    Lenovo ThinkPad T400
    Cinnamon 1.4 desktop

    For any additional info, please tell me what you need.

    EDIT: I'm sorry, I forgot to mention that I added acpi_backlight=vendor to the GRUB cmdline as well; the results above are with that option, which works somewhat better than the default.

    Read the article

  • Is There a Real Advantage to Generic Repository?

    - by Sam
    I was reading through some articles on the advantages of creating generic repositories for a new app (example). The idea seems nice because it lets me use the same repository to do several things for several different entity types at once:

        IRepository repo = new EfRepository(); // Would normally pass through IOC into constructor
        var c1 = new Country() { Name = "United States", CountryCode = "US" };
        var c2 = new Country() { Name = "Canada", CountryCode = "CA" };
        var c3 = new Country() { Name = "Mexico", CountryCode = "MX" };
        var p1 = new Province() { Country = c1, Name = "Alabama", Abbreviation = "AL" };
        var p2 = new Province() { Country = c1, Name = "Alaska", Abbreviation = "AK" };
        var p3 = new Province() { Country = c2, Name = "Alberta", Abbreviation = "AB" };
        repo.Add<Country>(c1);
        repo.Add<Country>(c2);
        repo.Add<Country>(c3);
        repo.Add<Province>(p1);
        repo.Add<Province>(p2);
        repo.Add<Province>(p3);
        repo.Save();

    However, the rest of the implementation of the repository has a heavy reliance on LINQ:

        IQueryable<T> Query();
        IList<T> Find(Expression<Func<T,bool>> predicate);
        T Get(Expression<Func<T,bool>> predicate);
        T First(Expression<Func<T,bool>> predicate);
        //... and so on

    This repository pattern worked fantastically for Entity Framework, and pretty much offered a 1 to 1 mapping of the methods available on DbContext/DbSet. But given the slow uptake of LINQ by data access technologies outside of Entity Framework, what advantage does this provide over working directly with the DbContext?

    I attempted to write a PetaPoco version of the repository, but PetaPoco doesn't support LINQ expressions, which makes creating a generic IRepository interface pretty much useless unless you only use it for the basic GetAll, GetById, Add, Update, Delete, and Save methods and utilize it as a base class. Then you have to create specific repositories with specialized methods to handle all the "where" clauses that I could previously pass in as a predicate.

    Is the generic repository pattern useful for anything outside of Entity Framework? If not, why would someone use it at all instead of working directly with Entity Framework?

    Edit: The original link doesn't reflect the pattern I was using in my sample code. Here is an (updated link).

    Read the article

  • Apache doesn't load .php files

    - by Haddex
    First, sorry for my English and for asking something that's answered all over the web. I've read a lot of posts about this problem but I still can't find the solution. I'm a web developer who recently moved to Ubuntu from Windows 7. I had a website done (it's online and working) and I set up LAMP to keep working with it. I made a test.php file with:

        <?php phpinfo(); ?>

    and put it in the /var/www/html directory; it shows all the information about PHP and I was really happy: "OK, it's all done, tomorrow I will work hard."

    But I placed my whole site into /var/www/html, not in a folder. The index.php is in /var/www/html, but guess what: it doesn't load any of my .php files, the browser just keeps thinking. What I did:

    - I restarted Apache: /etc/init.d/apache2 restart
    - I tried again with the test.php file and it works fine
    - I put a .html file in /var/www/html and it works fine
    - I looked at /etc/apache2/sites-enabled/000-default.conf and it says: DocumentRoot /var/www/html
    - I looked at /etc/apache2/mods-enabled/dir.conf and it says: DirectoryIndex index.html index.cgi index.pl index.php ...

    Edit: I think it's something related to phpMyAdmin, as if I'm not able to connect with the database. But I get nothing on the screen when trying to load the page so... I'm not sure. I can access the URL localhost/phpmyadmin and I edited the connection.php file like this:

        <?php
        # FileName="Connection_php_mysql.htm"
        # Type="MYSQL"
        # HTTP="true"
        $hostname_rakstadconnection = "localhost";
        $database_rakstadconnection = "rakstadclandb";
        $username_rakstadconnection = "root";
        $password_rakstadconnection = "admin";
        $rakstadconnection = mysql_connect($hostname_rakstadconnection, $username_rakstadconnection, $password_rakstadconnection) or trigger_error(mysql_error(),E_USER_ERROR);
        mysql_query("SET NAMES 'utf8'");
        ?>

    The name of the database is correct, as are the user and password.

    http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112609_zpsc45ddb72.png
    http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112120_zps0b9e15f7.png

    Edit 2: could this be because it's a website that I brought to Linux from Windows? I used Dreamweaver.

    Edit 3: I changed the # to /*/, nothing. The error.log file says:

        [Mon Jun 09 17:08:13.627881 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Warning:  require_once(/var/www/html/Connections/rakstadconnection.php): failed to open stream: Permission denied in /var/www/html/index.php on line 1
        [Mon Jun 09 17:08:13.627933 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Fatal error:  require_once(): Failed opening required 'Connections/rakstadconnection.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/index.php on line 1

    I'm reading the error log but... should I add a Linux path into my index.php file? I don't think so. Thanks.

    Read the article
