Search Results

Search found 24951 results on 999 pages for 'default scope'.

Page 239/999 | < Previous Page | 235 236 237 238 239 240 241 242 243 244 245 246  | Next Page >

  • Configure VPN to access remote LAN network on Windows7

    - by PiotrK
    Situation: I have two Windows 7 machines (a PC and a laptop). I've set up the PC as a VPN server and the laptop as a VPN client using the default built-in Windows 7 network tools. I've disabled "use default gateway on remote network" on the client machine, so the client doesn't try to route all communication through the VPN. I've forwarded port 1723 (TCP/UDP) on the NAT to my server and enabled IPSec/PPTP/L2TP passthrough. I've put my laptop on an independent network (basically I've connected it via a 3G network), connected to the VPN server, and checked ipconfig /all. I got:

        IP Address: 192.168.1.101
        Mask:       255.255.255.255
        Gateway:    (none)

    The LAN mask in the server's network is 255.255.255.0. I am surely missing something obvious, but Google doesn't give me any good advice. How can I access the local LAN network from the remote VPN client? How can I access local shared documents?
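
    A hedged sketch of the usual workaround: add a static route for the server-side LAN over the VPN link. The 192.168.1.0/24 network and the client's VPN address are taken from the question above; adjust both to your setup, and run from an elevated prompt on the client:

        route add 192.168.1.0 mask 255.255.255.0 192.168.1.101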

    Read the article

  • Convert mkv to mp4 with ffmpeg

    - by JohnS
    When I try converting mkv to mp4 using ffmpeg, the following error occurs:

        [ipod @ 0x16fa0a0] Application provided invalid, non monotonically increasing dts to muxer in stream 0: -2 = -2
        av_interleaved_write_frame(): Invalid argument

    I used this command to convert the file:

        ffmpeg -i input.mkv -vcodec copy -acodec copy -absf aac_adtstoasc output.m4v

    The input file has the following characteristics (mediainfo input.mkv):

        General
        Unique ID            : 200459305952356554213392832683163418790 (0x96CF0ED8DB5914CBB9E18163689280A6)
        Complete name        : input.mkv
        Format               : Matroska
        Format version       : Version 2
        File size            : 1.46 GiB
        Duration             : 1h 5mn
        Overall bit rate     : 3 168 Kbps
        Encoded date         : UTC 2010-09-26 21:44:02
        Writing application  : mkvmerge v2.9.5 ('Tu es le seul') built on Jun 17 2009 16:28:30
        Writing library      : libebml v0.7.8 + libmatroska v0.8.1

        Video
        ID                   : 1
        Format               : AVC
        Format/Info          : Advanced Video Codec
        Format profile       : [email protected]
        Format settings, CABAC    : Yes
        Format settings, ReFrames : 4 frames
        Codec ID             : V_MPEG4/ISO/AVC
        Duration             : 1h 5mn
        Bit rate             : 2 910 Kbps
        Width                : 1 280 pixels
        Height               : 720 pixels
        Display aspect ratio : 16:9
        Frame rate           : 25.000 fps
        Color space          : YUV
        Chroma subsampling   : 4:2:0
        Bit depth            : 8 bits
        Scan type            : Progressive
        Bits/(Pixel*Frame)   : 0.126
        Stream size          : 1.31 GiB (90%)
        Writing library      : x264 core 105 r1724 b02df7b
        Encoding settings    : cabac=1 / ref=3 / deblock=1:0:0 / analyse=0x3:0x113 / me=hex / subme=6 / psy=1 / psy_rd=1.00:0.00 / mixed_ref=0 / me_range=16 / chroma_me=1 / trellis=1 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=0 / chroma_qp_offset=-2 / threads=18 / sliced_threads=0 / nr=0 / decimate=1 / interlaced=0 / constrained_intra=0 / bframes=3 / b_pyramid=2 / b_adapt=1 / b_bias=0 / direct=3 / weightb=1 / open_gop=0 / weightp=0 / keyint=250 / keyint_min=25 / scenecut=40 / intra_refresh=0 / rc=2pass / mbtree=0 / bitrate=2910 / ratetol=1.0 / qcomp=0.60 / qpmin=10 / qpmax=51 / qpstep=4 / cplxblur=20.0 / qblur=0.5 / ip_ratio=1.40 / pb_ratio=1.30 / aq=1:1.00
        Default              : Yes
        Forced               : No

        Audio
        ID                   : 2
        Format               : AC-3
        Format/Info          : Audio Coding 3
        Mode extension       : CM (complete main)
        Codec ID             : A_AC3
        Duration             : 1h 5mn
        Bit rate mode        : Constant
        Bit rate             : 256 Kbps
        Channel(s)           : 2 channels
        Channel positions    : Front: L R
        Sampling rate        : 48.0 KHz
        Bit depth            : 16 bits
        Compression mode     : Lossy
        Stream size          : 121 MiB (8%)
        Language             : English
        Default              : Yes
        Forced               : No

    Being new to ffmpeg, I'm not sure what the error means or how to correct it. Thanks!
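
    A hedged variant worth trying, untested against this exact file: -fflags +genpts asks ffmpeg to regenerate presentation timestamps, which often clears the non-monotonic dts error, and the aac_adtstoasc bitstream filter can be dropped since the audio track above is AC-3, not AAC:

        ffmpeg -fflags +genpts -i input.mkv -vcodec copy -acodec copy output.mp4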

    Read the article

  • Identity in .NET 4.5 – Part 1: Status Quo (Beta 1)

    - by Your DisplayName here!
    .NET 4.5 is a big release for claims-based identity. WIF becomes part of the base class library, and structural classes like Claim, ClaimsPrincipal and ClaimsIdentity even go straight into mscorlib. You will now be able to access all WIF functionality from prominent namespaces like ‘System.Security.Claims’ and ‘System.IdentityModel’ (yay!). But it is more than simply merging assemblies; in fact, claims are now a first-class citizen in the whole .NET Framework. All built-in identity classes, like FormsIdentity for ASP.NET and WindowsIdentity, now derive from ClaimsIdentity. Likewise, all built-in principal classes like GenericPrincipal and WindowsPrincipal derive from ClaimsPrincipal. In other words, the moment you compile your .NET application against 4.5, you are claims-based. That's a big (and excellent) change. While the classes are designed in a way that you won't "feel" a difference by default, having the power of claims under the hood (and by default) will change the way you design security features with the new .NET Framework. I am currently doing a number of proofs of concept and will write about that in the future. There are a number of nice "little" features, like the FindAll(), FindFirst() and HasClaim() methods on both ClaimsIdentity and ClaimsPrincipal. This makes querying claims much more streamlined. I also had to smile when I saw ClaimsPrincipal.Current (have a look at the code yourself) ;) With all the goodness also come a number of breaking changes. I will write about that, too. In addition, Vittorio just announced the beta availability of a new wizard/configuration tool that makes it easier to do common things like federating with an IdP or creating a test STS. Go get the Beta and the tools and start writing claims-enabled applications! Interesting times ahead!
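
    A minimal sketch of those query helpers against the Beta 1 API; the claim types and values below are hypothetical, chosen only to show the shape of the calls:

        using System;
        using System.Security.Claims;

        static class ClaimsDemo
        {
            static void Main()
            {
                // ClaimsPrincipal.Current gives you the current principal as claims
                var user = ClaimsPrincipal.Current;

                // HasClaim/FindFirst replace manual enumeration of the claims collection
                if (user.HasClaim(ClaimTypes.Role, "Sales"))
                {
                    Claim email = user.FindFirst(ClaimTypes.Email);
                    Console.WriteLine(email != null ? email.Value : "(no email claim)");
                }
            }
        }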

    Read the article

  • Print driver installs failing

    - by Kasius
    All of the Windows 7 64-bit Enterprise machines in my organization are failing to install a good number of printer drivers that previously installed without issue. This only happens with printer drivers, and not with all printer drivers, just some. Network drivers, video drivers, etc. have had no problems. Here is part of setupapi.dev.log for a Dymo LabelWriter printer driver that is failing to install:

        dvi: {Plug and Play Service: Device Install for USBPRINT\DYMOLABELWRITER_450_TURBO\6&538F51D&0&USB001}
        ump: Creating Install Process: DrvInst.exe 09:36:58.071
        ndv: Infpath=C:\Windows\INF\oem0.inf
        ndv: DriverNodeName=dymo.inf:DYMO.NTamd64.6.0:LW_450_TURBO_VISTA:8.1.0.363:usbprint\dymolabelwriter_450_aa08
        ndv: DriverStorepath=C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf
        ndv: Building driver list from driver node strong name...
        dvi: Searching for hardware ID(s):
        dvi:      usbprint\dymolabelwriter_450_aa08
        dvi:      dymolabelwriter_450_aa08
        inf: Opened PNF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings])
        dvi: Selected driver installs from section [LW_450_TURBO_VISTA] in 'c:\windows\system32\driverstore\filerepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf'.
        dvi: Class GUID of device changed to: {4d36e979-e325-11ce-bfc1-08002be10318}.
        dvi: Set selected driver complete.
        ndv: {Core Device Install} 09:36:58.133
        inf: Opened INF: 'C:\Windows\INF\oem0.inf' ([strings])
        inf: Saved PNF: 'C:\Windows\INF\oem0.PNF' (Language = 0409)
        dvi: {DIF_ALLOW_INSTALL} 09:36:58.164
        dvi: Using exported function 'ClassInstall32' in module 'C:\Windows\system32\ntprint.dll'.
        dvi: Class installer == ntprint.dll,ClassInstall32
        dvi: No CoInstallers found
        dvi: Class installer: Enter 09:36:58.164
        dvi: Class installer: Exit
        dvi: Default installer: Enter 09:36:58.180
        dvi: Default installer: Exit
        dvi: {DIF_ALLOW_INSTALL - exit(0xe000020e)} 09:36:58.180
        ndv: Installing files...
        dvi: {DIF_INSTALLDEVICEFILES} 09:36:58.180
        dvi: Class installer: Enter 09:36:58.180
        inf: Opened INF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings])
        inf: Opened INF: 'C:\Windows\System32\DriverStore\FileRepository\dymo.inf_amd64_neutral_3a631b118b7a5828\dymo.inf' ([strings])
        !!! dvi: Class installer: failed(0x00000490)!
        !!! dvi: Error 1168: Element not found.
        dvi: {DIF_INSTALLDEVICEFILES - exit(0x00000490)} 09:37:22.063
        ndv: Device install status=0x00000490
        ndv: Performing device install final cleanup...
        !   ndv: Queueing up error report since device installation failed...
        ndv: {Core Device Install - exit(0x00000490)} 09:37:22.063
        dvi: {DIF_DESTROYPRIVATEDATA} 09:37:22.063
        dvi: Class installer: Enter 09:37:22.063
        dvi: Class installer: Exit
        dvi: Default installer: Enter 09:37:22.063
        dvi: Default installer: Exit
        dvi: {DIF_DESTROYPRIVATEDATA - exit(0xe000020e)} 09:37:22.063
        ump: Server install process exited with code 0x00000490 09:37:22.063
        ump: {Plug and Play Service: Device Install exit(00000490)}

    Notice these lines in particular:

        !!! dvi: Class installer: failed(0x00000490)!
        !!! dvi: Error 1168: Element not found.
        dvi: {DIF_INSTALLDEVICEFILES - exit(0x00000490)} 09:37:22.063
        ndv: Device install status=0x00000490

    From what I have read, the "Element not found" error should be accompanied by an event describing what element was not found.
    The error that appears in Device Manager is "The driver cannot be installed because it is either not digitally signed or not signed in the appropriate manner." It appears to be signed fine, though: it has an accompanying .CAT file and worked previously. And when installing, the following messages are logged in setupapi.dev.log:

        sto: {DRIVERSTORE_IMPORT_NOTIFY_VALIDATE} 09:36:56.277
        inf: Opened INF: 'C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\dymo.inf' ([strings])
        sig: {_VERIFY_FILE_SIGNATURE} 09:36:56.292
        sig: Key      = dymo.inf
        sig: FilePath = C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\dymo.inf
        sig: Catalog  = C:\Windows\System32\DriverStore\Temp\{272e2305-961c-7942-9ede-966f01047043}\DYMO.CAT
        sig: Success: File is signed in catalog.
        sig: {_VERIFY_FILE_SIGNATURE exit(0x00000000)} 09:36:56.355
        sto: Validating driver package files against catalog 'DYMO.CAT'.
        sto: Driver package is valid.
        sto: {DRIVERSTORE_IMPORT_NOTIFY_VALIDATE exit(0x00000000)} 09:36:56.402
        sto: Verified driver package signature:
        sto:      Digital Signer Score = 0x0D000005
        sto:      Digital Signer Name  = Microsoft Windows Hardware Compatibility Publisher

    Now here's where it gets strange. If I take the machine off the domain, the driver installs fine. But it doesn't seem to have anything to do with Group Policy: I moved the machine to an OU that blocks inheritance, ran a gpupdate, ran rsop.msc to verify, and tried again, and it still didn't work. Likewise, I removed a machine from the domain, manually set all of the domain Group Policy settings in gpedit.msc, and tried that way, and it worked fine. So the Group Policy settings seem to be irrelevant. What other domain-related issue could be causing this, though? Any ideas on what to try next would be greatly appreciated. I'm not sure where to go from here. Thanks!

    Read the article

  • Getting Started with Oracle Fusion Procurement

    Designed from the ground-up using the latest technology advances and incorporating the best practices gathered from Oracle's thousands of customers, Fusion Applications are 100 percent open standards-based business applications that set a new standard for the way we innovate, work and adopt technology. Delivered as a complete suite of modular applications, Fusion Applications work with your existing portfolio to evolve your business to a new level of performance. In this AppCast, part of a special series on Fusion Applications, you hear about the unique advantages of Fusion Procurement, learn about the scope of the first release and discover how Fusion Procurement modules can be used to complement and enhance your existing Procurement solutions.

    Read the article

  • SSMS Tools Pack 1.9.3 is out!

    - by Mladen Prajdic
    This release adds a great new feature and fixes a few bugs. The new feature, called Window Content History, saves the whole text in all opened SQL windows every N minutes, with the default being 30 minutes. This fixes a shortcoming of the Query Execution History, which is saved only when a query is run: if you're working on a large script and never execute it, the existing Query Execution History wouldn't save it. By contrast, the Window Content History saves everything in a .sql file, so you can even open it in SSMS. The Query Execution History and Window Content History files are correlated by the same directory and file name, so when you search through the Query Execution History you get to see the whole saved Window Content History for that query. Because Window Content History saves data in simple, searchable .sql files, there isn't a special search editor built in. It is turned ON by default, but despite the built-in optimizations for space minimization, be careful not to let it fill your disk. You can see how it looks in the pictures in the feature list.

    The fixed bugs are:

    - SSMS 2008 R2 slowness reported by a few people.
    - An Object Explorer context-menu bug where multiple SSMS Tools entries were shown, and wrong entries appeared for a node.
    - A datagrid bug in SQL snippets.
    - Ability to read illegal XML characters from log files.
    - The upper limit of a saved history text is fixed at 5 MB.
    - A bug where searching through result sets prevented search.
    - A bug with text formatting erroring out for certain scripts.
    - A bug with finding servers, where null was returned even though servers existed.
    - Run Custom Scripts objects had a bug where |SchemaName| didn't display the correct table schema for columns. This is fixed. Also, |NodeName| and |ObjectName| values now show the same thing.

    You can download the new version 1.9.3 here. Enjoy it!

    Read the article

  • Lucid 10.04 LTS => Precise 12.04.1: upgrade doesn't work

    - by Rastom
    I googled and looked into all the known issues on the Ubuntu forums, but I can't figure out why a 10.04 LTS server won't detect the latest LTS, 12.04.1. I guess since 12.04 is a fresh dist, not much is reported for related issues. Here is what I did:

        apt-get update
        apt-get upgrade
        apt-get install update-manager-core

    It was already installed, so no update for this package. I checked /etc/update-manager/release-upgrades:

        [DEFAULT]
        # Default prompting behavior, valid options:
        #
        # never  - Never check for a new release.
        # normal - Check to see if a new release is available. If more than one new
        #          release is found, the release upgrader will attempt to upgrade to
        #          the release that immediately succeeds the currently-running
        #          release.
        # lts    - Check to see if a new LTS release is available. The upgrader
        #          will attempt to upgrade to the first LTS release available after
        #          the currently-running one. Note that this option should not be
        #          used if the currently-running release is not itself an LTS
        #          release, since in that case the upgrader won't be able to
        #          determine if a newer release is available.
        Prompt=lts

    I also checked my sources list (/etc/apt/sources.list) before running apt-get:

        deb http://archive.ubuntu.com/ubuntu/ lucid main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse
        deb http://archive.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb-src http://security.ubuntu.com/ubuntu lucid-security main restricted
        deb http://security.ubuntu.com/ubuntu lucid-security universe
        deb-src http://security.ubuntu.com/ubuntu lucid-security universe
        deb http://security.ubuntu.com/ubuntu lucid-security multiverse
        deb-src http://security.ubuntu.com/ubuntu lucid-security multiverse
        # deb http://landscape.canonical.com/packages/hardy ./
        # deb-src http://landscape.canonical.com/packages/hardy ./

    Then, following the Ubuntu guide for the Precise upgrade, the command below should work:

        root@xxxxxxxxx:/etc/apt# do-release-upgrade -d
        Checking for a new ubuntu release
        No new release found

    So am I missing something? The server was accessing the outside through a proxy, but I granted direct access to this server to avoid any Internet access problem or redirection; still no clue. Any help would be appreciated.
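
    One hedged check, since do-release-upgrade with Prompt=lts decides from a meta-release file rather than from sources.list: confirm the server can actually fetch that file (URL taken from update-manager's defaults), and try the command without -d, since -d asks for the development release rather than the stable one:

        wget -qO- http://changelogs.ubuntu.com/meta-release-lts
        do-release-upgrade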

    Read the article

  • Reinstalled Ubuntu 12.04 and now I cannot change preferences like theme, wallpaper, and nautilus preferences

    - by krishnab
    So I just down-rev'd Ubuntu from 13.04 back to 12.04 LTS desktop 64 (Precise). I am using Unity. I just reformatted the Ubuntu partition but kept my home directory intact, and everything seemed to reconnect just fine. No data was lost. However, I found that I cannot seem to change my preferences. I cannot change my desktop background, no matter how many ways I try: Ubuntu Tweak, GNOME Tweak, System Settings. I also cannot change the system GTK+ theme, though apparently I am able to change the window border theme. Further, I cannot seem to change my Nautilus preferences, so I cannot make the default view a list view, and I cannot make the "single-click" behavior the default. I even went into the org.gnome.nautilus settings to manually change things, but no luck. I thought it was a permissions issue, so I did a chown on the home folder and on the .gvfs folder. Still no luck. So somewhere there seems to be a permission that I am not catching. Does anyone have any suggestions? Thanks.
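
    One more hedged thing to check, on the theory that a leftover from 13.04 is in the way: in 12.04 most of these preferences live in the dconf database rather than in the home folder's top level, so a root-owned or incompatible ~/.config/dconf/user could produce exactly this "settings won't stick" symptom:

        ls -l ~/.config/dconf/user
        # if it turns out to be owned by root (an assumption, not a diagnosis):
        sudo chown -R $USER:$USER ~/.config/dconf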

    Read the article

  • Apache won't start after creating symbolic link

    - by Carlin
    I'm installing Apache for the first time and trying to display some web pages on localhost. Apache's default path for serving web pages is /var/www/html/, but I don't have permission to write there. Rather than change ownership of the entire directory, I decided to get rid of the /html/ folder in /var/www/ and create it in my home directory instead. Then I made a symbolic link, ln -s /home/me/html/ /var/www/, hoping Apache would serve web pages from my home directory while keeping the default path and following the symbolic link. When I go to start the Apache service with service httpd start, I get: Job failed. See system journal and 'systemctl status' for details.
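
    A hedged checklist for home-directory document roots; the usual culprits are traversal permissions on the home directory and, on distributions that use service httpd (a Fedora/RHEL-style setup, which the systemctl error message suggests), SELinux labeling. The path is taken from the question:

        chmod o+x /home/me                              # Apache must be able to traverse the path
        ls -Z /home/me/html                             # inspect the SELinux context, if enforcing
        chcon -R -t httpd_sys_content_t /home/me/html   # label the content for httpd
        journalctl -xn                                  # read the "system journal" the error points at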

    Read the article

  • Create Site Definition in SharePoint2010 Part1

    - by ybbest
    Creating a Site Definition in Visual Studio 2010 is very easy; it has a built-in Site Definition project template. Today I will show you how to create a Site Definition using Visual Studio 2010. You can download the complete source code here. Create the Site Definition project and choose the site URL; note that you can only choose Farm Solution, as the Sandboxed Solution option is greyed out. After Visual Studio does its magic, you can see the project structure as follows: it contains default.aspx, which is the default page for the site definition; onet.xml, which contains the actions for creating a site using this site template (this is like the elements.xml in a feature); and webtemp_Ybbest.CustomSiteDef.xml, which registers this site definition in SharePoint (this is similar to the feature.xml in a feature). Open webtemp_Ybbest.CustomSiteDef.xml and modify the ID field, as it has to be unique; you can optionally modify other fields as well. Deploy your solution, and you will find your site template when creating a site in either Central Administration or Site Creation in the Site Actions menu. Create your site collection using your new site template, then navigate to it; you will find the site created using your site definition.
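
    For reference, a minimal sketch of what the registration file might contain after editing; the ID, titles and descriptions below are hypothetical, while the element and attribute names follow the standard webtemp schema:

        <?xml version="1.0" encoding="utf-8"?>
        <Templates xmlns:ows="Microsoft SharePoint">
          <Template Name="Ybbest.CustomSiteDef" ID="10001">
            <Configuration ID="0" Title="Ybbest Custom Site"
                           Description="Site created from the custom site definition"
                           DisplayCategory="Custom" Hidden="FALSE" ImageUrl="" />
          </Template>
        </Templates>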

    Read the article

  • How to efficiently preserve a really big navigation history in Firefox?

    - by brandizzi
    When I was using a Mac, my Firefox stored items in its history for a really long time. Sometimes I needed to find a link to a site I had seen two years earlier, and it found it! Also, the autocomplete in the Firefox bar is really great, so a long history plus autocompletion is a wonderful feature for me. Unfortunately, this does not seem to happen in Ubuntu's Firefox. I looked for solutions, but I just found some Firefox developers saying the option of expanding history is out for performance reasons and that one is well advised not to try to change it (which reads to me as "we cannot make it work well, so we limit the scope"). Anyway, my question is: is there a way of efficiently expanding the size of Firefox's history? Sorry for my bitterness, but a solution with strings attached (mostly saying that I should not do it, like this addon) is not a solution for me. Does someone have the same need as mine and found a solution?
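
    For what it's worth, Firefox exposes the cap as a hidden preference; a hedged sketch, set via about:config or a user.js file in the profile folder (the value below is arbitrary, and the performance caveats the developers mention presumably still apply):

        // user.js - raise the maximum number of pages kept in Places history
        user_pref("places.history.expiration.max_pages", 500000);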

    Read the article

  • LEMP Stack on Ubuntu Server 13.04 not parsing PHP Switch Statement Properly

    - by schester
    On my Ubuntu 12.04 Server LTS box with nginx 1.1.19, the following PHP code works properly:

        switch($_SESSION['user']['permissions']) {
            case 9:
                echo "Super Admin Privileges";
                break;
            case 0:
                echo "Operator Privileges";
                break;
            case 1:
                echo "Line Leader Privileges";
                break;
            case 2:
                echo "Supervisor Privileges";
                break;
            case 3:
                echo "Engineer Privileges";
                break;
            case 4:
                echo "Manager Privileges";
                break;
            case 5:
                echo "Administrator Privileges";
                break;
            default:
                echo "Operator Privileges";
        }

    However, I have a backup server running Ubuntu Server 13.04 with nginx 1.4.1 which has the exact same copy of the script (synced), but instead of breaking on the break; statement, it echoes the whole PHP script. The output on the 12.04 box is similar to this:

        You are logged in with Super Admin Privileges

    But on the 13.04 box, the output is like this:

        You are logged in logged in with Super Admin Privileges"; break; case 0: echo "Operator Privileges"; break; case 1: echo "Line Leader Privileges"; break; case 2: echo "Supervisor Privileges"; break; case 3: echo "Engineer Privileges"; break; case 4: echo "Manager Privileges"; break; case 5: echo "Administrator Privileges"; break; default: echo "Operator Privileges"; } ?>

    I have also tried changing the script from a switch statement to if statements, but with the same results. Any idea what is wrong?
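
    For comparison, a minimal sketch of the PHP location block a 13.04 nginx vhost needs; the socket path is an assumption (13.04 shipped php5-fpm, which listens on a Unix socket by default). If a block like this is missing from the new server's vhost, or the fastcgi_pass target is wrong, nginx ends up serving PHP source rather than executing it:

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }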

    Read the article

  • IIS Not Accepting Active Directory Login Credentials

    - by Dale Jay
    I have an ASP.NET web form using Microsoft's boilerplate Active Directory login page, set up exactly as suggested. Windows Authentication is activated at the "Default Website" and "MyWebsite" levels, and Domain\This.User is given "Allow" access to the site. After entering valid credentials for This.User on the web form, a popup window appears asking me to enter my credentials yet again. Despite entering valid credentials for This.User (trying both the Domain\This.User and This.User formats), it rejects the credentials and returns an unauthorized security headers page (error 401.2). The Active Directory user This.User is valid, the IP address of the AD server has been verified, and SPNs have been set up for the server.

    Error code: 0x80070005

    Default Web Site security config:

        <system.web>
            <identity impersonate="true" />
            <authentication mode="Windows" />
            <customErrors mode="Off" />
            <compilation debug="true" />
        </system.web>

    Sub-site security config:

        <authentication mode="Windows">
            <forms loginUrl="~/logon.aspx" timeout="2880" />
        </authentication>
        <authorization>
            <deny users="?" />
            <allow users="*" />
        </authorization>
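
    Since the question mentions SPNs, one hedged thing to verify is that the HTTP SPN is registered against whichever account the application pool actually runs as; the host and account names below are placeholders:

        setspn -S HTTP/webserver.domain.com DOMAIN\AppPoolAccount
        setspn -L DOMAIN\AppPoolAccount     rem list what is already registered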

    Read the article

  • networking tunnel adapter connections?

    - by Karthik Balaguru
    I understand that a tunnel adapter LAN is for encapsulating IPv6 packets with an IPv4 header so that they can be sent across an IPv4 network. A few queries popped up in my mind based on this. If I do ipconfig, then apart from the Ethernet adapter LAN details, I get a series of statements as below:

        Tunnel adapter Local Area Connection* 6
        Tunnel adapter Local Area Connection* 7
        Tunnel adapter Local Area Connection* 12
        Tunnel adapter Local Area Connection* 13
        Tunnel adapter Local Area Connection* 14
        Tunnel adapter Local Area Connection* 15
        Tunnel adapter Local Area Connection* 16

    Except for *16, all the other tunnel adapter local area connections show Media Disconnected. Why is the numbering for the tunnel adapter LAN not sequential? It is 6, 7, 12, 13, 14, 15, 16: a strange numbering scheme! I tried to figure it out by thinking of some arithmetic series, but it does not seem to fit. There is a huge gap between 7 and 12. Any ideas? What is the need for so many tunnel adapter LAN connections? Can you tell me a scenario that requires all of them?

    I did ipconfig /all to get more information. From the listing, I understand that 16, 15, 14 and 12 are Microsoft 6to4 adapters; 13 and 6 are ISATAP adapters; and 7 is the Teredo Tunneling Pseudo-Interface. I understand that the above are for automatic tunneling, so that the tunnel endpoints are determined automatically by the routing infrastructure. 6to4 is recommended by RFC 3056 for automatic tunneling that uses protocol 41 for encapsulation; it is typically used when an end user wants to connect to the IPv6 Internet using their existing IPv4 connection. Teredo is an automatic tunneling technique that uses UDP encapsulation across multiple NATs; that is, it grants IPv6 connectivity to nodes that are located behind IPv6-unaware NAT devices. ISATAP treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local IPv6 address, in order to transmit IPv6 packets between dual-stack nodes on top of an IPv4 network. To put it in simple words, ISATAP is an intra-site mechanism, while 6to4 and Teredo are inter-site tunneling mechanisms.

    It seems that Teredo alone should be enabled by default in Vista, but my system does not show it as enabled by default. Interestingly, it shows a 6to4 tunnel adapter (Tunnel adapter LAN connection 16) enabled by default. Any specific reasons for that? And if I do ipconfig /all, why is only one Teredo adapter present while four 6to4 adapters are present? I searched the Internet for answers to the above queries, but I am unable to find clear answers.
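
    A hedged set of commands for poking at these interfaces on Vista; each netsh context below corresponds to one of the transition technologies described above and reports whether it is active:

        netsh interface ipv6 show interfaces
        netsh interface teredo show state
        netsh interface 6to4 show state
        netsh interface isatap show state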

    Read the article

  • Keep your Root Authorities up to date

    - by John Breakwell
    Originally posted on: http://geekswithblogs.net/Plumbersmate/archive/2013/06/20/keep-your-root-authorities-up-to-date.aspx

    By default, Windows will automatically update its internal list of trusted root authorities as long as the Update Root Certificates function is installed. This should be enabled by default and takes manual intervention to remove. With this component enabled, the following happens: if you are presented with a certificate issued by an untrusted root authority, your computer will contact the Windows Update web site to see if Microsoft has added the CA to its list of trusted authorities. If it has been added to the Microsoft list of trusted authorities, its certificate will automatically be added to your trusted certificate store.

    If the component is not installed and a certificate from an untrusted CA is encountered, the browser shows a warning. This is an inconvenience for the person browsing the site, as they need to click to continue. Applications, though, will be unable to proceed and will throw an exception. Example:

        ERROR_WINHTTP_SECURE_FAILURE 12175 (0x00002F8F)
        One or more errors were found in the Secure Sockets Layer (SSL) certificate sent by the server.

    If you look at the certificate's properties, you can see the "Issued by:" value. This must match a Trusted Root Certificate Authority in the current user's certificate store.

    So turn on automatic updating of trusted root authority certificates. For Windows Vista and above, this option is controlled through Group Policy; see the "To Turn Off the Update Root Certificates Feature by Using Group Policy" section of the following TechNet article: Certificate Support and Resulting Internet Communication in Windows Vista. If Windows Update is a blocked site, then download and deploy the latest pack of root certificates from Microsoft: Update for Root Certificates For Windows XP [May 2013] (KB931125).

    Failing that, find a machine that has the latest root certificates installed and export them from there:

    1. Open up the Certificates console.
    2. Right-click the required Trusted Root Certificate Authority certificate.
    3. Choose Export from "All Tasks" to open up the Certificate Export Wizard.
    4. Choose an export file format; DER should be fine.
    5. Provide a file name and complete the export.
    6. Move the file to the machine that's missing the certificate.
    7. Right-click the file and choose "Install Certificate" to open up the Certificate Import Wizard.
    8. Allow the wizard to automatically select the certificate store and complete the import.

    On a side note, for troubleshooting certificate issues it can be helpful to clear the SSL state.
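
    On newer versions of Windows there is also a certutil shortcut for the "blocked Windows Update" case; a hedged sketch, since availability of this verb depends on the OS version:

        certutil -generateSSTFromWU roots.sst
        rem then open roots.sst in the Certificates console and import what is needed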

    Read the article

  • nginx regex locations w/ different roots not working as expected

    - by Wells Oliver
    I have the following two rules:

        location / {
            root /var/www/default;
        }

        location ~* /myapp(.*)$ {
            root /home/me/myapp/www;
            try_files $uri $uri/ /handle.php?url=$uri&$args;
        }

    When I browse to /myapp/foo it works, kind of: the error is logged as a 404:

        *3 open() "/var/www/default/handle.php" failed (2: No such file or directory)

    So it's handling the regex match but just not using the right document root. Why is this? For the record, I am trying to get /myapp/* requests handled by the second location, and everything else by the first.
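
    A hedged reading of what is happening: when try_files falls back to /handle.php, nginx restarts the location search for that new URI, which matches location / and therefore uses /var/www/default. One sketch of a fix is to make the fallback URI re-match the regex location (paths as in the question):

        location ~* /myapp(.*)$ {
            root /home/me/myapp/www;
            # the fallback now re-enters this location instead of "location /"
            try_files $uri $uri/ /myapp/handle.php?url=$uri&$args;
        }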

    Read the article

  • How do I enable greedy MigrationHeuristic in Karmic?

    - by Matthew
    My laptop performs very poorly for 2D graphics. For example, Docky's zoom effect and Compiz effects are very choppy. In Jaunty, I was able to fix this by adding the following line under the "Device" section in my xorg.conf:

        Option "MigrationHeuristic" "greedy"

    In Karmic, there is no xorg.conf by default, so I copied my old one (from Jaunty). However, everything is still slow. Here is my xorg.conf:

        Section "Device"
            Identifier "Configured Video Device"
            Option     "MigrationHeuristic" "greedy"
        EndSection

        Section "Monitor"
            Identifier "Configured Monitor"
        EndSection

        Section "Screen"
            Identifier "Default Screen"
            Monitor    "Configured Monitor"
            Device     "Configured Video Device"
        EndSection

    From googling around, it sounds like "MigrationHeuristic" is only an option for the "EXA" acceleration method, while Karmic has switched the intel driver to "UXA". So I tried adding this line under the "Device" section:

        Option "AccelMethod" "exa"

    But this didn't help.
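
    One hedged variant to try: name the intel driver explicitly in the Device section alongside the two options, on the assumption that without a Driver line the driver-specific options may not be picked up. The section below otherwise mirrors the one above:

        Section "Device"
            Identifier "Configured Video Device"
            Driver     "intel"
            Option     "AccelMethod"        "exa"
            Option     "MigrationHeuristic" "greedy"
        EndSection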

    Read the article

  • Web Developer or Software Engineer?

    - by Deepesh
    This is a question I have been asking myself, and I am really confused about which path to take, so I need your help with the pros and cons of these two professions in today's world. I love web application development, as the Web is the best thing to happen in this age and nearly everyone gets by on the World Wide Web; I also tend to keep learning about new technologies and web services. On the other hand, I like software engineering for desktop applications too, as I have had experience developing small-scale software in VB.Net, Java, C++, etc. Which path has more scope and a better future? What's your view?

    Read the article

  • XAMPP pointing a file outside root folder

    - by Ravi
    I am using XAMPP for the first time, on a Mac, and running into problems accessing anything other than the root folder (htdocs). When I place my web application inside htdocs with the default httpd.conf file, it works; when I try to point my web application URL elsewhere in httpd.conf, it throws an error. I am aware that to modify the root folder I need to change my XAMPP/etc/httpd.conf file. Starting from the default XAMPP Mac settings, I am trying to change the server root, document root and directory in XAMPP/etc/httpd.conf as follows:

        ServerRoot "/Users/ravi/Documents/Development/Backbone/backboneboilerplate"
        DocumentRoot "/Users/ravi/Documents/Development/Backbone/backboneboilerplate"

        <Directory />
            Options FollowSymLinks
            AllowOverride All
            Order deny,allow
            Deny from all
        </Directory>

        <Directory "/Users/ravi/Documents/Development/Backbone/backboneboilerplate">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    It throws this error when trying to start XAMPP:

        httpd: Syntax error on line 54 of /Applications/XAMPP/xamppfiles/etc/httpd.conf: Cannot load /Users/ravi/Documents/Development/Backbone/backboneboilerplate/modules/mod_authn_file.so into server: cannot create object file image or add library
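
    A hedged reading of that error: LoadModule paths such as modules/mod_authn_file.so are resolved relative to ServerRoot, so pointing ServerRoot at the project folder makes Apache look for its own modules there. A sketch of the usual fix is to leave ServerRoot at the XAMPP install and change only DocumentRoot (paths taken from the question and the error message):

        ServerRoot "/Applications/XAMPP/xamppfiles"
        DocumentRoot "/Users/ravi/Documents/Development/Backbone/backboneboilerplate"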

    Read the article

  • C# Adds Optional and Named Arguments

    Earlier this month Microsoft released Visual Studio 2010, the .NET Framework 4.0 (which includes ASP.NET 4.0), and new versions of their core programming languages: C# 4.0 and Visual Basic 10. In designing the latest versions of C# and VB, Microsoft has worked to bring the two languages into closer parity. Certain features available in C# were missing in VB, and vice versa. Last week I wrote about Visual Basic 2010's language enhancements, which include implicit line continuation, auto-implemented properties, and collection initializers - three useful features that were available in previous versions of C#. Similarly, C# 4.0 introduces new features to the C# programming language that were available in earlier versions of Visual Basic, namely optional arguments and named arguments. Optional arguments allow developers to specify default values for one or more arguments to a method. When calling such a method, these optional arguments may be omitted, in which case their default value is used. In a nutshell, optional arguments allow for a more terse syntax for method overloading. Named arguments, on the other hand, improve readability by allowing developers to indicate the name of an argument (along with its value) when calling a method. This article examines how to use optional arguments and named arguments in C# 4.0. Read on to learn more! Read More >
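
    A minimal sketch of both features; the method and parameter names below are made up for illustration:

        using System;

        class OptionalArgsDemo
        {
            // severity and echo are optional arguments with default values
            static void LogMessage(string text, string severity = "Info", bool echo = false)
            {
                Console.WriteLine("[{0}] {1}{2}", severity, text, echo ? " (echoed)" : "");
            }

            static void Main()
            {
                LogMessage("service started");        // both defaults used
                LogMessage("disk full", "Error");     // positional override of severity
                LogMessage("retrying", echo: true);   // named argument skips over severity
            }
        }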

    Read the article

  • Apache and MySQL taking all the memory? Maximum connections?

    - by lpfavreau
    I've had one of our servers going down (network-wise) while keeping its uptime (so it looks like the server is not losing power) recently. I asked my hosting company to investigate and was told, after investigation, that Apache and MySQL were using 80% of the memory at all times, peaking at 95%, and that I might need to add more RAM to the server. One of their justifications for adding more RAM was that I was using the default max connections settings (125 for MySQL and 150 for Apache), and that handling those 150 simultaneous connections would need at least 3 GB of memory instead of the 1 GB I have at the moment.

    Now, I understand that tweaking the max connections might be better than leaving the default setting, although I didn't feel it was a concern at the moment, having had servers with the same configuration handle more traffic than the current 1 or 2 visitors before lunch, telling myself I'd tweak it later depending on the visit patterns. I've also always known Apache is more memory hungry under default settings than competitors such as nginx and lighttpd. Nonetheless, looking at the stats of my machine, I'm trying to see how my hosting company got those numbers. I'm getting:

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:          1000        944         56          0        148        725
        -/+ buffers/cache:          71        929
        Swap:         1953          0       1953

    Which I guess means that yes, the server is reserving around 95% of its memory at the moment, but I also thought it meant that only 71 MB out of the 1000 total were really used by the applications, looking at the buffers/cache row. Also, I don't see any swapping:

        # vmstat 60
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0      0  57612 151704 742596    0    0     1     1    3   11  0  0 100  0
         0  0      0  57604 151704 742596    0    0     0     1    1   24  0  0 100  0
         0  0      0  57604 151704 742596    0    0     0     2    1   18  0  0 100  0
         0  0      0  57604 151704 742596    0    0     0     0    1   13  0  0 100  0

    And finally, while requesting a page:

        top - 08:33:19 up 3 days, 13:11,  2 users,  load average: 0.06, 0.02, 0.00
        Tasks:  81 total,   1 running,  80 sleeping,   0 stopped,   0 zombie
        Cpu(s):  1.3%us,  0.3%sy,  0.0%ni, 98.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
        Mem:   1024616k total,   976744k used,    47872k free,   151716k buffers
        Swap:  2000052k total,        0k used,  2000052k free,   742596k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        24914 www-data  20   0 26296 8640 3724 S    2  0.8   0:00.06 apache2
        23785 mysql     20   0  125m  18m 5268 S    1  1.9   0:04.54 mysqld
        24491 www-data  20   0 25828 7488 3180 S    1  0.7   0:00.02 apache2
            1 root      20   0  2844 1688  544 S    0  0.2   0:01.30 init
        ...

    So, I'd like to know, experts of Server Fault: Do I really need more RAM at the moment? How did they calculate that 150 simultaneous connections would need 3 GB? Thanks for your help!
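
    For what it's worth, the 3 GB figure looks like the usual worst-case sizing arithmetic; a hedged reconstruction, where the ~20 MB per-process figure is an assumption, not a measurement from this box:

        150 Apache prefork clients x ~20 MB resident per child  =  ~3,000 MB
        plus MySQL: 125 connections x per-thread buffers (sort, read, join),
        on top of the shared key/InnoDB buffers

    Against the top output above, where the apache2 children show roughly 8 MB RES and mysqld about 18 MB, that per-child assumption looks pessimistic for this workload.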

    Read the article

  • bash completion processing gone bad, how to debug?

    - by msw
    It all started with a simple alias:

        alias gv='gvim --remote-quiet'

    and now gv Space Tab gives nothing, where it normally should give filenames. Oddly, alias gvi='gvim --remote-quiet' works as expected. I clearly have a workaround, but I'd like to know what is catching my gv for special processing. compopt is no help, as gv shares the same settings as ls, which does filename completion correctly:

        $ compopt gv
        compopt +o bashdefault +o default +o dirnames -o filenames +o nospace +o plusdirs gv
        $ compopt ls
        compopt +o bashdefault +o default +o dirnames -o filenames +o nospace +o plusdirs ls

    The complete command is slightly more helpful, but it doesn't tell me why my two characters got singled out for alteration:

        $ complete -p gv
        complete -o filenames -F _filedir_xspec gv
        $ complete -p ls
        complete -o filenames -F _longopt ls
        $ complete -p echo
        bash: complete: echo: no completion specification
        $ alias gvi='gvim --remote-silent'
        msw@tallguy:~/.gnupg$ complete -p gvi
        bash: complete: gvi: no completion specification

    Where did complete -o filenames -F _filedir_xspec gv come from?
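
    A hedged explanation and workaround: the bash-completion package ships extension-specific specs (routed through _filedir_xspec) for various commands, and one of them is gv, the PostScript viewer, which restricts completion to matching file types and so shadows the alias. For the current shell you can drop or loosen that spec:

        complete -r gv          # remove the inherited spec for gv
        complete -o default gv  # or fall back to plain filename completion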

    Read the article

  • One page using querystring or many folders and pages?

    - by ClarkeyBoy
    I have an application where I keep the 'core' code in one folder, for which there is a virtual directory in the root, such that I can include any core files using /myApp/core/bla.asp. I then have two folders outside of this, each with a default.asp, which currently use the querystring to define what page should be displayed. One page is for general users; the other will only be accessible to users who have permission to manage users / usergroups / permissions. The core code checks the querystring and then checks the permissions for that user. An example as it is now: default.asp?action=view&viewtype=list&objectid=server. I am not worried about SEO, as this is an internal app and uses Windows Auth. My question is: is it better the way it is now, or would it be better to have something like the following?

        /server/view/list/
        /server/view/?id=123
        /server/create/
        /server/edit/?id=123
        /server/remove/?id=123

    In each of the above folders I would have a home page which defines all the variables currently determined by the querystring; in /server/create/, for example, I would define the action as 'create', the object name as 'server', and so on. In terms of future development, I really have no idea which method would be best. I think the second method would be better in terms of following what page does what, but this is such a huge change to make at this stage that I would really like some opinions, preferably based on experience.

    PS Sorry if the tags are wrong; I am new to this forum and thought this was a bit too much of a discussion for Stack Overflow, as that is very much right/wrong answer based. I got the idea SE is more discussion based.

    Read the article

  • Running evrouter at boot with init.d, or after xserver starts

    - by J V
    I'm using evrouter to set up mouse button binds, and init.d to start it. My init.d script:

        #!/bin/bash
        # Simple init.d script to run evrouter

        ### BEGIN INIT INFO
        # Provides:          evrouter
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Set evrouter bindings
        # Description:       Set evrouter bindings at boot time.
        ### END INIT INFO

        config="/opt/hacks/evrouterrc"

        case "$1" in
            start|restart|reload|force-reload)
                evrouter -c "$config" /dev/input/event*
                ;;
            stop)
                echo "Evrouter is not a daemon, change settings file at '$config' and restart"
                ;;
            *)
                echo "Usage: $0 start" >&2
                exit 3
                ;;
        esac

    evrouter, however, complains that:

        evrouter: could not open display ""

    If evrouter requires the X server to be up, how do I get init to wait until after the X server starts before running this script? If the X server restarts, will this script run automatically? Running it with sudo service evrouter start still produces this error; can init.d scripts not tell where my display is? (I'm not exactly familiar with init, runlevels, etc.)
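
    A hedged tweak, since evrouter talks to the X server: export a display (and the session owner's X authority file; the path below is a hypothetical user's) before launching, or skip init.d entirely and start evrouter from the user's X session, e.g. from ~/.xprofile, where DISPLAY is already set:

        # inside the start) branch of the init.d script:
        export DISPLAY=:0
        export XAUTHORITY=/home/me/.Xauthority   # hypothetical user's auth file
        evrouter -c "$config" /dev/input/event*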

    Read the article
