Search Results

Search found 35708 results on 1429 pages for 'default copy constructor'.

Page 378 of 1429

  • 1080p Screen resolution problem after 10.04 to 12.04 update

    - by Ale
    I have a Samsung LCD 40" with an NVidia GeForce 6150SE nForce 430 card. I recently upgraded from 10.04 to 12.04, and the best resolution I can get is 1360x768. I've tried the proprietary drivers available in the repository:

        kmod:nvidia_current
        kmod:nvidia_173_updates
        kmod:nvidia_current_updates
        kmod:nvidia_96
        kmod:nvidia_96_updates
        kmod:nvidia_173

    I've also downloaded the latest driver from NVidia's web site, version 295.40, but still no luck. With the Nouveau driver I can only get 1024x768. I know there is no problem with my hardware (video card, cable and monitor); I was using it perfectly on 10.04. Can anybody suggest something else I could try to get my 1920x1080 resolution back? Thanks in advance. Here is some more information that I gathered from reading other similar posts on askubuntu:

        $ lspci | grep VGA
        00:0d.0 VGA compatible controller: NVIDIA Corporation C61 [GeForce 6150SE nForce 430] (rev a2)

        $ xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 320 x 240, current 1360 x 768, maximum 1360 x 768
        default connected 1360x768+0+0 0mm x 0mm
           1360x768       50.0     52.0*
           1024x768       51.0
           800x600        53.0     54.0     55.0
           680x384        56.0     57.0
           640x480        58.0
           576x432        59.0
           512x384        60.0
           400x300        61.0     62.0     63.0
           320x240        64.0
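
    One generic thing to try while the driver situation is sorted out is adding the missing 1920x1080 mode by hand with xrandr. This is a sketch rather than a known fix for this card: the modeline numbers are what cvt prints for 1080p at 60 Hz, and the output name "default" is taken from the xrandr listing above.

        # generate a modeline for 1920x1080 at 60 Hz
        cvt 1920 1080 60

        # register the modeline with X (values below are cvt's output for 1080p60)
        xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync

        # attach the new mode to the output and switch to it
        xrandr --addmode default 1920x1080_60.00
        xrandr --output default --mode 1920x1080_60.00

    If the driver in use does not expose proper RandR outputs, the --addmode step may be refused, which itself narrows the problem down to the driver rather than the monitor.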


  • Errors in ~/.xsession-errors

    - by Kuberan Naganathan
    I'm getting errors in ~/.xsession-errors. I'm running Ubuntu 12.04. Many apps fail to run without any mention of problems in the .xsession-errors file. I looked around and tried to resolve the issues myself but have failed so far. I have to say it's possible that the issue is related to me mounting /home on another partition (I say possibly because stuff worked OK for a while). Fortunately my .xsession-errors file is small enough to post here. Thanks in advance for the help:

        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        gnome-keyring-daemon: insufficient process capabilities, unsecure memory might get used
        Backend : gconf
        Integration : true
        Profile : unity
        Adding plugins
        Initializing core options...done
        (gnome-settings-daemon:2547): color-plugin-WARNING **: failed to get edid: unable to get EDID for output
        (gnome-settings-daemon:2547): color-plugin-WARNING **: unable to get EDID for xrandr-default: unable to get EDID for output
        (gnome-settings-daemon:2547): color-plugin-WARNING **: failed to reset xrandr-default gamma tables: gamma size is zero
        Initializing composite options...done
        Initializing opengl options...done
        Initializing decor options...done
        ** Message: applet now removed from the notification area
        Initializing vpswitch options...done
        Initializing snap options...done
        Initializing mousepoll options...done
        Initializing resize options...done
        Initializing place options...done
        Initializing move options...done
        Initializing wall options...done
        Initializing grid options...done
        I/O warning : failed to load external entity "/home/kuberan/.compiz/session/10754cf696d335e98e13471376531156900000024960034"
        Initializing session options...done
        Initializing gnomecompat options...done
        Initializing animation options...done
        Initializing fade options...done
        Initializing unitymtgrabhandles options...done
        Initializing workarounds options...done
        Initializing scale options...done
        compiz (expo) - Warn: failed to bind image to texture
        Initializing expo options...done
        Initializing ezoom options...done
        ** Message: using fallback from indicator to GtkStatusIcon
        (compiz:2560): GConf-CRITICAL **: gconf_client_add_dir: assertion `gconf_valid_key (dirname, NULL)' failed
        Initializing unityshell options...done
        Setting Update "main_menu_key"
        Setting Update "run_key"
        Setting Update "icon_size"
        ** Message: moving back from GtkStatusIcon to indicator


  • Changing Admin Site URL (actually port) - how?

    - by TomTom
    I have a new install of the brand new SharePoint 2010. I use host-header-identified site collections for everything. By default the admin site is on a random port. I would like to move the admin site to port 80, on the server name. As all sites have coded names (for example "intranet", "projects"), this would allow administration via the server name, which is easier because external access does not have to remember the port number. How do I do this? I already changed the default URL, but the site (application) is still wrongly mapped. I don't find anything to change the IIS settings in the admin site. I possibly just missed it, so can anyone point me in the right direction?


  • I can't get grub menu to show up during boot

    - by wim
    After trying (and failing) to install better ATI drivers in 11.10, I've somehow lost my grub menu at boot time. The screen does change to the familiar purple colour, but instead of a list of boot options it's just a blank solid colour, which then disappears quickly and boots into the default entry normally. How can I get the bootloader back? I've tried sudo update-grub and also various different combinations of resolutions and colour depths in the startupmanager application with no success (640x480, 1024x768, 1600x1200, 16 bits, 8 bits, 10 second delay, 7 second delay, 2 second delay...). Edit: I have already tried holding down Shift during bootup and it does not seem to change the behaviour. I get the message "GRUB Loading" in the terminal, but then where the grub menu normally appears I get a solid blank magenta screen for a while. Here are the contents of /etc/default/grub:

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'

        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=" vga=798 splash"

        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console

        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480

        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true

        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"

        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"
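
    For reference, the GRUB_HIDDEN_TIMEOUT lines in that file are what normally suppress the menu, and GRUB_TERMINAL can take graphics modes out of the picture entirely. A minimal sketch of forcing the menu to always show and regenerating the config; the sed edits are just one way of commenting or uncommenting those lines:

        # comment out the hidden-timeout lines so the menu is always drawn
        sudo sed -i 's/^GRUB_HIDDEN_TIMEOUT=/#GRUB_HIDDEN_TIMEOUT=/' /etc/default/grub
        sudo sed -i 's/^GRUB_HIDDEN_TIMEOUT_QUIET=/#GRUB_HIDDEN_TIMEOUT_QUIET=/' /etc/default/grub

        # optionally rule out graphics-mode problems by drawing the menu on the text terminal
        sudo sed -i 's/^#GRUB_TERMINAL=console/GRUB_TERMINAL=console/' /etc/default/grub

        # write the changes into /boot/grub/grub.cfg
        sudo update-grub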


  • Unidentified network: How to configure TCP/IPv4 for Win7?

    - by Zolomon
    When I try to connect to the internet I keep getting the error "Unidentified network". I've tried numerous attempts at restoring access without success: IP release, flushing the DNS cache, reinstalling the NIC, reactivating the NIC, resetting the router and so on... I've read several times that it's my default gateway that's wrong. Until now I've had automatic IP/DNS configuration set without any problems, and then it stopped working for some reason. Does anyone know how I should specify the IP? My subnet mask is 255.255.255.0 and the default gateway is 192.168.0.1, but I have no idea how to determine what IP I should set. I use a D-Link DIR-655, and other computers on the network have IPs like 192.168.0.194; the next is 192.168.0.197. (I'm completely lost and am trying to cool down after two weekends of debugging filled with despair.)


  • Unable to access internet if wireless enabled

    - by balki
    The following is my route output. eth0 is my wired network and eth1 is my wireless network. Only the wired one has access to the internet. If I enable wireless, I am not able to access the internet; it tries to go out via eth1 and I get the 404 page of the wireless router. Why does eth1 have higher preference even though the default route is via eth0 (link)?

        [balakrishnan@mylap ~]$ route
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         10.26.0.1       0.0.0.0         UG    0      0        0 eth0
        10.26.0.0       *               255.255.192.0   U     1      0        0 eth0
        link-local      *               255.255.0.0     U     1000   0        0 eth0
        192.168.1.0     *               255.255.255.0   U     9      0        0 eth1
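
    Two quick checks that can narrow this down, offered as a sketch rather than something from the thread: ask the kernel which route an outside address would actually take, and, if the wireless routes are winning, push eth1 to a worse metric.

        # show the route the kernel would pick for an external address
        ip route get 8.8.8.8

        # give eth1 a worse metric than the eth0 default route
        # (the ifmetric tool may need installing: sudo apt-get install ifmetric)
        sudo ifmetric eth1 100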


  • How to add a service to the S runlevel in Debian?

    - by MasterM
    I have the following script (what it does exactly is not important):

        #!/bin/sh -e
        ### BEGIN INIT INFO
        # Provides:          watchdog_early
        # Required-Start:    udev
        # Required-Stop:
        # Default-Start:     S
        # Default-Stop:
        # X-Interactive:     true
        # Short-Description: Start watchdog early.
        ### END INIT INFO

        # Do stuff here...

    I insert it into the S runlevel by invoking:

        insserv watchdog_early

    The appropriate link is created in /etc/rcS.d:

        S04watchdog_early -> ../init.d/watchdog_early

    and /etc/init.d/watchdog_early is executable (it has mode 755). Despite all this, it is NOT being run at boot. Why?
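
    A couple of generic checks for a case like this, sketched below rather than taken from the question: confirm where insserv placed the link relative to its dependencies, and run the script the way the rc machinery would, with command tracing turned on.

        # confirm the ordering insserv produced in the S runlevel
        ls -l /etc/rcS.d/ | grep -i watchdog

        # run the script as init would, tracing every command it executes
        sudo sh -x /etc/init.d/watchdog_early start

        # if bootlogd is enabled, see whether anything about it was logged at boot
        grep -i watchdog /var/log/boot 2>/dev/null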


  • Make mod_wsgi use python2.7.2 instead of python2.6?

    - by guron
    I am running Ubuntu 10.04.1 LTS, which came pre-packaged with python2.6, but I need to replace it with python2.7.2 (the reason is simple: 2.7 has a lot of features backported from 3). I installed python2.7.2 using:

        ./configure
        make
        make altinstall

    The altinstall option installed it, without touching the system default version, to /usr/local/lib/python2.7 and placed the interpreter in /usr/local/bin/python2.7. Then, to help mod_wsgi find python2.7, I added the following to /etc/apache2/sites-available/wsgisite:

        WSGIPythonHome /usr/local

    I start Apache and run a test wsgi app, BUT I am greeted by Python 2.6.5 and not Python 2.7. Later I replaced the default python symlink to point to python2.7:

        ln -f /usr/local/bin/python2.7 /usr/bin/python

    Now typing 'python' on the console opens python2.7, but somehow mod_wsgi still picks up python2.6. Next I tried:

        PATH=/usr/local/bin:$PATH
        export PATH

    and then did a quick Apache restart, but yet again it's python2.6!! Here is my $PATH:

        /usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games

    Contents of /etc/apache2/sites-available/wsgisite:

        WSGIPythonHome /usr/local
        <VirtualHost *:80>
            ServerName wsgitest.local
            DocumentRoot /home/wwwhost/pydocs/wsgi
            <Directory /home/wwwhost/pydocs/wsgi>
                Order allow,deny
                Allow from all
            </Directory>
            WSGIScriptAlias / /home/wwwhost/pydocs/wsgi/app.wsgi
        </VirtualHost>

    app.wsgi:

        import sys

        def application(environ, start_response):
            status = '200 OK'
            output = sys.version
            response_headers = [('Content-type', 'text/plain'),
                                ('Content-Length', str(len(output)))]
            start_response(status, response_headers)
            return [output]

    Apache error.log:

        'import site' failed; use -v for traceback
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23235): Initializing Python.
        [Sun Jun 19 00:27:21 2011] [notice] Apache/2.2.14 (Ubuntu) mod_wsgi/2.8 Python/2.6.5 configured -- resuming normal operations
        [Sun Jun 19 00:27:21 2011] [info] Server built: Nov 18 2010 21:20:56
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23238): Attach interpreter ''.
        [Sun Jun 19 00:27:21 2011] [info] mod_wsgi (pid=23239): Attach interpreter ''.
        [Sun Jun 19 00:27:31 2011] [info] mod_wsgi (pid=23238): Create interpreter 'wsgitest.local|'.
        [Sun Jun 19 00:27:31 2011] [info] [client 192.168.1.205] mod_wsgi (pid=23238, process='', application='wsgitest.local|'): Loading WSGI script '/home/wwwhost/pydocs/$
        [Sun Jun 19 00:27:50 2011] [info] mod_wsgi (pid=23239): Create interpreter 'wsgitest.local|'.

    Has anybody ever managed to make mod_wsgi run on a non-system-default version of Python?
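
    One detail the log hints at ("Python/2.6.5" in the Apache startup line) is that mod_wsgi is linked against a particular libpython when it is compiled, so WSGIPythonHome alone generally cannot switch it to a different interpreter. A rough sketch of rebuilding the module from source against the altinstalled 2.7 follows; the tarball version and dev package name are illustrative and may differ on a given system:

        # apxs and the Apache headers are needed to build the module
        sudo apt-get install apache2-threaded-dev

        # build mod_wsgi against /usr/local/bin/python2.7
        tar xzf mod_wsgi-3.4.tar.gz          # version number is illustrative
        cd mod_wsgi-3.4
        ./configure --with-python=/usr/local/bin/python2.7
        make
        sudo make install

        # restart Apache and look for Python/2.7 in the startup line of error.log
        sudo /etc/init.d/apache2 restart

    Note that the 2.7 build may also need to have been configured with --enable-shared for mod_wsgi to link against it; that is an assumption worth checking rather than a given.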


  • Git, auto updating, security and tampering?

    - by acidzombie24
    I was thinking about hosting my private project on my server (I may use gitolite) and keeping a copy on my local machine as backup (git clone, then an automated git fetch every few minutes). I want to know what happens if there is a bug in gitolite, or somewhere else on my server, and the source code and git repository have been tampered with. Will my backup also be corrupted? Will I easily be able to revert the source using the history?
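
    On the integrity side, git's content hashing can be leaned on directly: verify the backup clone's object store and pin a known-good tip somewhere the server cannot rewrite. A sketch only; the paths are illustrative:

        # inside the backup clone: verify every object and their connectivity
        git fsck --full --strict

        # record the current tip commit out of band, somewhere the server cannot touch
        git rev-parse origin/master > /safe/offline/known-good-head

        # later, after a fetch, check that the recorded commit is still in the branch history
        # (a rewritten or tampered history would typically fail this test)
        git fetch origin
        git merge-base --is-ancestor "$(cat /safe/offline/known-good-head)" origin/master \
            && echo "recorded commit still reachable from origin/master"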


  • Conflicting ip routes with local table on attaching a virtual network interface

    - by user1071840
    I have an EC2 instance with these ip rules:

        $ sudo ip rule show
        0:      from all lookup local
        32766:  from all lookup main
        32767:  from all lookup default

    I can attach an elastic network interface (ENI) to it with a private IP. Say the IP of my machine is 10.1.3.12 and the IP of the interface is 10.1.1.190. As soon as I attach the interface to my machine, a new entry is added to the routing policy and the local routing table:

        $ sudo ip rule show
        0:      from all lookup local
        32765:  from 10.1.1.190 lookup 10003
        32766:  from all lookup main
        32767:  from all lookup default

        $ sudo ip route show table local
        broadcast 10.1.1.0 dev eth3  proto kernel  scope link  src 10.1.1.190
        local 10.1.1.190 dev eth3  proto kernel  scope host  src 10.1.1.190
        broadcast 10.1.1.255 dev eth3  proto kernel  scope link  src 10.1.1.190
        broadcast 10.1.3.0 dev eth0  proto kernel  scope link  src 10.1.3.12
        local 10.1.3.12 dev eth0  proto kernel  scope host  src 10.1.3.12
        broadcast 10.1.3.255 dev eth0  proto kernel  scope link  src 10.1.3.12
        broadcast 127.0.0.0 dev lo  proto kernel  scope link  src 127.0.0.1
        local 127.0.0.0/8 dev lo  proto kernel  scope host  src 127.0.0.1
        local 127.0.0.1 dev lo  proto kernel  scope host  src 127.0.0.1
        broadcast 127.255.255.255 dev lo  proto kernel  scope link  src 127.0.0.1

    I can send traffic to this ENI directly from a host that can have the same IP as the host the ENI is attached to. This is where the problem starts. I ran tcpdump on the port in question and saw multiple SYNs going to the ENI with src 10.1.3.12 and destination 10.1.1.190, but I didn't see even a single ACK. In my understanding, if ACKs were being sent from the ENI they would have destination 10.1.3.12, i.e. the same as the local machine's IP, and such packets will now be routed as local packets matching the local routing policy:

        local 10.1.3.12 dev eth0  proto kernel  scope host  src 10.1.3.12

    I'd like all the packets originating from 10.1.1.190 (my ENI) to go back out on the same interface, i.e. eth3 in this case. The contents of the new table 10003 are:

        $ sudo ip route show table 10003
        default via 10.1.1.1 dev eth3

    I think I can do the following: (1) I don't know if it's possible, but perhaps decrease the priority of the local table so the packets match table 10003; (2) use iptables to mangle these packets and update the local table route to include the mark information. But I'm not sure if these are the right approaches.
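
    On the first idea: rule preferences can be rearranged with ip rule, for instance re-adding the local table at a lower preference and putting a source rule for the ENI address ahead of it. This is only a sketch of the mechanism, not a tested fix, and it is easy to cut yourself off from a remote instance this way, so try it from somewhere recoverable:

        # duplicate the local-table rule at a lower preference...
        sudo ip rule add from all lookup local pref 100
        # ...then remove the original pref-0 entry so it no longer matches first
        sudo ip rule del from all lookup local pref 0

        # insert a rule for traffic sourced from the ENI address ahead of the local table
        sudo ip rule add from 10.1.1.190 lookup 10003 pref 50

        # confirm the new ordering
        sudo ip rule show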


  • Over-scan Issues when using HDTV through VGA

    - by RPG Master
    Right now all we can do is set the TV to 1280x768 instead of its native resolution of 1360x768. Setting it to its native resolution gives you a screen with a large portion of the left side of the screen cut off. We've tried everything with the TV, so now we're turning to the innards of Ubuntu in hopes of fixing this. The computer is using an NVIDIA GeForce GT240. This is its current xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 1.0  (buildd@palmer)  Fri Apr 9 10:35:18 UTC 2010

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: builtin, VertRefresh source: builtin
            # HorizSync 28.0 - 55.0
            # VertRefresh 43.0 - 72.0
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "CRT-0"
            HorizSync       28.0 - 55.0
            VertRefresh     43.0 - 72.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce 6600"
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "1360x768 +0+0; 800x600 +0+0"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "0"
            Option         "TwinViewXineramaInfoOrder" "CRT-0"
            Option         "metamodes" "1360x768 +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection


  • Google and Semantic Search Engine Optimization (SEO)

    Semantic Search Engine Optimization is a new frontier for SEO experts who want to stay ahead of the Google curve in securing additional search engine rankings for their target search terms. 'Semantic SEO' is currently quite misunderstood in the SEO community. Once understood, the proper application of a Semantic SEO strategy for your web site (and for your clients) can pay big dividends in improving your on-page copy, page headings, anchor text and internal linking, and deliver increased site traffic for search engine queries containing alternate word meanings.


• Is there an application which fakes being the browser and lets me choose which real one to use for a given URL

    - by Dzmitry Lahoda
    Is there any application for Windows that does the following: I click a URL in Skype or an HTML file in Explorer. The application is the default "fake" browser, i.e. it is registered as the default browser. The application shows several buttons, each representing an installed or running browser. I can choose the real browser, click it, and the URL opens in the chosen real browser. A quick search did not reveal such an application. Context: I work in an environment where some sites only work in specific browsers, and I get clickable URLs from different applications. Sometimes I want to launch a specific browser to use a specific add-in of it against the URL provided. I also have a portable "secured" browser that I want to launch only for trusted sites.


  • Why is it good to have website content files on a separate drive other than system (OS) drive?

    - by Jeffrey
    I am wondering what benefits I would get from moving all website content files from the default inetpub directory (on C:) to something like D:\wwwroot. By default IIS creates a separate application pool for each website, and I am using the built-in user and group (IURS) as the authentication method. I've made sure each site directory has the appropriate permission settings, so I am not sure what benefits I would gain. Some of the environment settings are as below:

        VMWare
        Windows 2008 R2 64
        IIS 7.5
        C:\inetpub\site1
        C:\inetpub\site2

    Also, as this article (moving the iis7 inetpub directory to a different drive) points out, I'm not sure it's worth the trouble to migrate files to a different drive:

        PLEASE BE AWARE OF THE FOLLOWING: WINDOWS SERVICING EVENTS (I.E. HOTFIXES AND
        SERVICE PACKS) WOULD STILL REPLACE FILES IN THE ORIGINAL DIRECTORIES. THE
        LIKELIHOOD THAT FILES IN THE INETPUB DIRECTORIES HAVE TO BE REPLACED BY
        SERVICING IS LOW BUT FOR THIS REASON DELETING THE ORIGINAL DIRECTORIES IS NOT
        POSSIBLE.


  • LSB Script: how do I know if something goes wrong?

    - by ianaz
    How do I know if an LSB script fails to load, and where do I check the log of the LSB scripts? I added two scripts with the following command:

        update-rc.d scriptname defaults

    and just one of them launches the things I need. It does not seem to be a script error, since if I launch it with /etc/init.d/scriptname it works. This is my script:

        #!/bin/bash
        ### BEGIN INIT INFO
        # Provides:          nodes
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Starts all node apps
        # Description:       Starts all node apps like AAM, AMT,...
        ### END INIT INFO

        echo "Launch Node applications with forever"
        export PATH=/usr/local/bin:$PATH

        # Starts the redis server
        redis-server

        # Starts AAM
        forever -o /var/log/AAM.log -e /var/log/AAM.log --spinSleepTime 2000 -m 5 start /var/nodejs/AAM/app.js
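
    There is no dedicated per-script log for init scripts by default; a low-tech way to see what a script actually does at boot is to make it log itself. A sketch, with an illustrative log path, plus a check that the runlevel links were really created:

        # add near the top of the init script: capture stdout/stderr and trace commands
        exec >> /var/log/nodes-init.log 2>&1
        set -x

        # after the next boot, inspect what actually happened
        cat /var/log/nodes-init.log

        # also confirm update-rc.d created the expected links for the default runlevel
        ls -l /etc/rc2.d/ | grep -i nodes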


  • Free eBook - Control Your Transaction Log so it Doesn't Control You

    Download your free copy of SQL Server Transaction Log Management and see why understanding how log files work can make all the difference in a crisis. Want to work faster with SQL Server? If you want to work faster, try out the SQL Toolbelt. "The SQL Toolbelt provides tools that database developers as well as DBAs should not live without." - William Van Orden. Download the SQL Toolbelt here.


  • How to display password policy information for a user (Ubuntu)?

    - by C.W.Holeman II
    The Ubuntu Documentation (Ubuntu 9.04 > Ubuntu Server Guide > Security > User Management) states that there is a default minimum password length for Ubuntu: "By default, Ubuntu requires a minimum password length of 4 characters". Is there a command for displaying the current password policies for a user, in the way that the chage command displays the password expiration information for a specific user?

        > sudo chage -l SomeUserName
        Last password change                               : May 13, 2010
        Password expires                                   : never
        Password inactive                                  : never
        Account expires                                    : never
        Minimum number of days between password change     : 0
        Maximum number of days between password change     : 99999
        Number of days of warning before password expires  : 7

    This would be preferable to examining the various places that control the policy and interpreting them by hand, since that process could contain errors. A command that reports the composed policy would be used to check the policy-setting steps.
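
    As far as I know there is no single command on Ubuntu 9.04 that prints the composed policy; the pieces live in /etc/login.defs and the PAM stack. A sketch of where to look:

        # password aging defaults applied when accounts are created
        grep -E '^PASS_(MAX|MIN|WARN)' /etc/login.defs

        # the PAM line that enforces minimum length and obscurity checks
        grep -E 'pam_unix|pam_cracklib' /etc/pam.d/common-password

        # per-user aging values, as in the question
        sudo chage -l SomeUserName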


  • Keyword Selection

    As most marketers know from having experience on line, keyword selection is one of the top priorities to having your sales copy or article found in the search engines. Not only is this a top priority, it's the one thing you must spend time on to make sure you do have the keyword selection correct. You may ask.


  • Things to do After installing Visual Studio Express 12.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/06/25/things-to-do-after-install-visual-studio-express-12.aspx

    1. Environment > Documents > check the option for auto-load changes. With this checked, you can modify a file outside VWD and VWD reloads it without asking you for confirmation.
    2. Environment > Tabs and Windows > Preview Tab > uncheck the Solution Explorer option. This stops files being shown in the preview tab when you just click them in Solution Explorer.
    3. Projects and Solutions > check "Track active item in Solution Explorer". This helps you easily figure out which file you are working on and where it is in Solution Explorer.
    4. Text Editor > All Languages > check the word wrap option to enable word wrapping.
    5. Text Editor > CSS > Formatting > check the compact rule. This makes your file smaller in size and easier to read.
    6. Text Editor > HTML > Miscellaneous > uncheck the auto ID option. When you copy and paste HTML code, Visual Studio changes an ID if that ID already exists; this option disables that behaviour, which otherwise gets in the way when, for example, you duplicate markup inside an if{} else{} statement.
    7. Package Manager > General > Browse > copy the location of the cache folder and add it as a package source. This way you can reuse packages you have used earlier when you are offline.

    Thanks for reading my post.


  • FeedValidator & Feedburner get 404 when accessing wordpress RSS feeds when permalinks are enabled.

    - by Wazbaur
    I'm helping a friend set up a self-hosted Wordpress blog + feedburner and I'm seeing a problem with the feeds that I'm finding somewhat mysterious. Using the default permalink structure (e.g., ?p=123) everything works as expected; I can follow the feed in Google reader, navigate to it manually, and set it up in feedburner. However, once I switch away from the default permalink structure, feedburner and feedvalidator both report that accessing the feed is returning HTTP-404 and Google reader no longer shows new posts (I'm assuming for the same reason), but I can navigate to the feed using a browser. When I do that it appears as though nothing is wrong; there is a feed there and it contains all the posts I expect it to have. I've re-started the feedburner & reader set-up from the beginning after changing the link structure, so I don't think they're doing anything silly like looking at the feed at its old address. I've seen people with similar problems in various other places but there doesn't seem to be a good answer anywhere.


  • Inaccurate bandwidth limiting in altq queues

    - by overkordbaever
    I'm setting up an environment where I have one Linux server, one OpenBSD router and one Linux client, and I want to be able to limit how much bandwidth the client can use. I've been performing tests with netcat and time (using time to measure the duration of a transfer done with netcat), and what happens is that the queues aren't exact at all (this is with the TCP protocol; for some reason the queues will not work with UDP). For example: when setting a bandwidth limit of 10mbit, the client cannot use more than five mbit; when setting a limit of 100mbit, the client cannot use more than around 50mbit. The config looks like this (using a 100mbit limit in the example):

        # queue rules
        altq on { $int_if, $ext_if } cbq bandwidth 100Mb queue { def, low }
        queue def bandwidth 0Mb cbq(default)
        queue low bandwidth 100Mb cbq(default)

        # pass rules for the test
        pass out quick from $int_if to $ext_if queue low
        pass in quick from $ext_if to $int_if queue low
        pass out quick from $ext_if to $int_if queue low
        pass in quick from $int_if to $ext_if queue low


  • Authenticate with Django 1.5

    - by gorjuce
    I'm currently testing django 1.5 and a custom User model, but I've some problems. I've created a User class in my account app, which looks like:

        class User(AbstractBaseUser):
            email = models.EmailField()
            activation_key = models.CharField(max_length=255)
            is_active = models.BooleanField(default=False)
            is_admin = models.BooleanField(default=False)

            USERNAME_FIELD = 'email'

    I can correctly register a user, who is stored in my account_user table. Now, how can I log in? I've tried with:

        def login(request):
            form = AuthenticationForm()
            if request.method == 'POST':
                form = AuthenticationForm(request.POST)
                email = request.POST['username']
                password = request.POST['password']
                user = authenticate(username=email, password=password)
                if user is not None:
                    if user.is_active:
                        login(user)
                    else:
                        message = 'disabled account, check validation email'
                        return render(
                            request,
                            'account-login-failed.html',
                            {'message': message}
                        )
            return render(request, 'account-login.html', {'form': form})

    I can correctly register a new User. My forms.py, which contains my register form:

        class RegisterForm(forms.ModelForm):
            """ a form to create user"""
            password = forms.CharField(
                label="Password",
                widget=forms.PasswordInput()
            )
            password_confirm = forms.CharField(
                label="Password Repeat",
                widget=forms.PasswordInput()
            )

            class Meta:
                model = User
                exclude = ('last_login', 'activation_key')

            def clean_password_confirm(self):
                password = self.cleaned_data.get("password")
                password_confirm = self.cleaned_data.get("password_confirm")
                if password and password_confirm and password != password_confirm:
                    raise forms.ValidationError("Password don't math")
                return password_confirm

            def clean_email(self):
                if User.objects.filter(email__iexact=self.cleaned_data.get("email")):
                    raise forms.ValidationError("email already exists")
                return self.cleaned_data['email']

            def save(self):
                user = super(RegisterForm, self).save(commit=False)
                user.password = self.cleaned_data['password']
                user.activation_key = generate_sha1(user.email)
                user.save()
                return user

    My question is: why does authenticate give me None? I know I'm trying to authenticate() with an email as username, but is that not one of the reasons to use a custom User model?


  • Custom command in right-click menu not working

    - by Luke
    I have added, via the registry, a right-click menu option for all file types which is supposed to get the MD5 checksum for a file:

        HKEY_CLASSES_ROOT\*\shell\Checksum          (Default) = Get Checksum
        HKEY_CLASSES_ROOT\*\shell\Checksum\command  (Default) = checksum.cmd "%1"

    checksum.cmd simply clears the screen, calls fciv.exe using %1 and then pauses. Unfortunately, whilst the option "Get Checksum" appears correctly in the right-click menu, it doesn't perform the right action when clicked. When I click it, an "Open With" dialog opens, which is of course not what I want. Both checksum.cmd and fciv.exe are in the PATH. checksum.cmd:

        @echo off
        cls
        fciv.exe %1
        pause

    Anybody know what's going on?


  • Wine causes twin view to break

    - by deanvz
    I have endlessly been playing around with the NVIDIA X Server settings and changing my xorg.conf file to try and make it work for me, and on most days it's fine. In each instance I get it working for a while, and then this morning the most bizarre thing happened: the moment I open any type of Wine program (which never used to be a problem) my TwinView setup disappears and I am left with mirrored displays. I try to change the settings in the NVIDIA driver, but it's not interested and the screens remain mirrored. I have a workaround: restart my PC... Below are the contents of my current xorg.conf file.

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33  (buildd@zirconium)  Fri Mar 30 13:43:34 UTC 2012

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "LG Electronics W1934"
            HorizSync       30.0 - 83.0
            VertRefresh     56.0 - 75.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce 9400 GT"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "1"
            Option         "metamodes" "CRT-0: nvidia-auto-select +0+0, CRT-1: nvidia-auto-select +1440+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection
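
    A possible way to get the side-by-side layout back without a reboot is to re-apply the MetaMode from the command line. Whether the driver accepts this dynamically depends on the driver version, so treat it as a sketch rather than a known fix:

        # re-apply the TwinView MetaMode defined in xorg.conf
        nvidia-settings --assign CurrentMetaMode="CRT-0: nvidia-auto-select +0+0, CRT-1: nvidia-auto-select +1440+0"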


  • Why are there so many string classes in the face of std::string?

    - by fish
    It seems to me that many bigger C++ libraries end up creating their own string type. In the client code you either have to use the one from the library (QString, CString, fbstring etc., I'm sure anyone can name a few) or keep converting between the standard type and the one the library uses (which most of the time involves at least one copy). So, is there a particular misfeature or something wrong about std::string (just like auto_ptr semantics were bad)? Has it changed in C++11?

