Search Results

Search found 23783 results on 952 pages for 'edit and continue'.

  • Can I take an HDD in RAID 1 and plug it straight into a different machine?

    - by jacko
    I would assume that I can just take my HDD out of my NAS (a RAID 1 mirror), plug it into another enclosure and have it work off the bat, but I'd like to make sure... Any ideas? Edit: My current setup is a Netgear ReadyNAS in (hardware) RAID 1. I'm hoping to replace this with a home-theatre-type PC (possibly running Ubuntu), and would like to migrate my data without having to do a bulk transfer over the network between the two machines. Can anyone confirm whether this works for the Netgear ReadyNAS?

  • How best to integrate PPA into Debian?

    - by eicos
    I'm working with ZFS on Linux on my Debian squeeze server. I've found a useful package in an Ubuntu PPA, apparently by one of the ZoL developers, and I would like to integrate it into my package system. However, I am really having a terrible time doing this. It seems like it would be possible if I upgraded my system to the testing branch, but I'd prefer not to do this for obvious reasons. So, what is the One True Way to do this? Or, what is a passable way to do this, i.e. one that does not involve an ice-nine-like assimilation of my entire system to the testing branch? Edit: Silly question. I clicked the little green "technical information about this package" link on Launchpad and all was revealed.
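    For reference, one passable approach on a setup like this is to add the PPA as a plain APT source and pin it so it can only ever supply the packages you explicitly want. This is only a hedged sketch: the PPA owner/name, release and package name below are placeholders, not the actual ZoL PPA.

        # /etc/apt/sources.list.d/example-ppa.list  (placeholder PPA path and series)
        deb http://ppa.launchpad.net/EXAMPLE-OWNER/EXAMPLE-PPA/ubuntu lucid main
        deb-src http://ppa.launchpad.net/EXAMPLE-OWNER/EXAMPLE-PPA/ubuntu lucid main

        # /etc/apt/preferences.d/example-ppa
        # Block everything from the PPA except the one package you want.
        Package: *
        Pin: release o=LP-PPA-EXAMPLE-OWNER-EXAMPLE-PPA
        Pin-Priority: -1

        Package: the-package-you-want
        Pin: release o=LP-PPA-EXAMPLE-OWNER-EXAMPLE-PPA
        Pin-Priority: 500

    You still need to import the PPA's signing key (shown on its Launchpad page), and since the binaries were built against Ubuntu libraries, rebuilding the source package on the Debian box with dpkg-buildpackage is often the safer route.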

  • Would a hybrid drive work after SSD failure?

    - by lulalala
    A hybrid hard drive combines an SSD with a traditional hard drive. I know that SSDs can fail much more often than traditional hard drives. So I want to ask: when the SSD part of the hybrid drive fails, would I still be able to use the traditional hard drive? If it won't work like that, then I will consider add-in SATA cards instead, as that spreads the risk much better. EDIT: I guess it differs from model to model, so if yes, which models would work? (I am evaluating the Seagate DX for now)

  • What is a good Foxit reader equivalent (or other PDF editor)?

    - by Yanick Rochon
    On Windows, I have found Foxit Reader to be quite handy when I need to highlight text in a PDF document, make annotations, etc. Unfortunately, I have not yet found a product as user-friendly (and one that does not corrupt PDF files...) and as full-featured as the Foxit software... Any recommendations? ** UPDATE ** I just tried the OpenOffice PDF import extension. It seems to work OK... If anyone has used it for a while, I'd appreciate your feedback on it. Thanks! ** UPDATE ** You can't highlight text with OpenOffice's PDF extension. Doesn't matter, I was reading this thread and found out about Xournal. As it turns out, it's in the repository. It does not natively save to PDF, but once all edits are done, the document can be exported to PDF (and overwrite the old one, just like Gimp with its native .XCF format and an original PNG file, for example). I realize that this question is no longer a question in itself, but could be migrated to community wiki. However, feedback is still welcome! ** EDIT ** So... to close up this question, I have to say that I adopted Xournal. It is light and works pretty well, even on multi-page PDF documents. Thank you all for your answers!

  • A good log file analyzer for Windows

    - by Raminder
    Is there a text editor for Windows that can open just the first n lines of a large file? It would be nice if it could also open a set of lines from the middle of the file. EDIT: Basically my requirement is that I want to analyze huge (2 GB) log files, so any good tool that can open huge files with some analysis capabilities (searching, text highlighting, etc.) would be nice. I like Notepad++, but it wouldn't open a file of even about 650 MB. P.S. - Open source tools will be preferred.
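    Not an editor, but as a stopgap a few lines of Python (a minimal sketch; the file name and line numbers are placeholders) can pull the first n lines, or a window from the middle, out of a multi-gigabyte log without loading the whole file into memory:

        from itertools import islice

        LOG = "huge.log"  # placeholder path

        # First 1000 lines
        with open(LOG, errors="replace") as f:
            head = list(islice(f, 1000))

        # Lines 500000-500200 from the middle (streams past the earlier lines)
        with open(LOG, errors="replace") as f:
            middle = list(islice(f, 500000, 500200))

        with open("slice.txt", "w") as out:
            out.writelines(head + middle)

    The resulting slice.txt is then small enough to open in Notepad++ (or any editor) for searching and highlighting.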

  • How Do I Make A Bash Script for Git Checkin/Checkout?

    - by ServerChecker
    I was thinking of bringing up a Git service on an Ubuntu server. However, the way I and another programmer operate -- we really want to try and stick to one person working on a project at a time. How would I make a Bash script to create a check-in and check-out with Git? We want to prevent anyone from checking out code that someone else already has checked out, and it should error out with the name of the person who has the code checked out. EDIT: I'm not really interested in using Git for its fantastic diff features. I move 100 mph and don't have time to play diff games with the other developers. That's why we're using Git. If the other developers want to play the diff game, they can still do so. But when I check something out, I want it locked to everyone until I check it back in again.
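    One possible sketch (assuming a shared remote that both developers push to, and a LOCK file kept at the top of the repository; the script name and commit messages are made up for illustration):

        #!/bin/bash
        # repo-lock.sh checkout|checkin -- crude advisory locking on top of git
        set -e
        LOCKFILE="LOCK"

        case "$1" in
          checkout)
            git pull --ff-only
            if [ -f "$LOCKFILE" ]; then
              echo "Already checked out by: $(cat "$LOCKFILE")" >&2
              exit 1
            fi
            echo "$USER" > "$LOCKFILE"
            git add "$LOCKFILE"
            git commit -m "lock: checked out by $USER"
            git push
            ;;
          checkin)
            if [ "$(cat "$LOCKFILE" 2>/dev/null)" != "$USER" ]; then
              echo "You do not hold the lock." >&2
              exit 1
            fi
            git rm --quiet "$LOCKFILE"
            git add -A
            git commit -m "checkin by $USER"
            git push
            ;;
          *)
            echo "usage: $0 checkout|checkin" >&2
            exit 1
            ;;
        esac

    Note there is still a small race between the pull and the push in checkout (two people could grab the lock at the same moment), so this is advisory only; hard enforcement would need a server-side hook or a real lock manager.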

  • Asynchrony in C# 5 (Part II)

    - by javarg
    This article is a continuation of the series on the asynchronous features included in the new Async CTP preview for the next versions of C# and VB. Check out Part I for more information. So, let’s continue with TPL Dataflow:
    Asynchronous functions
    TPL Dataflow
    Task-based Asynchronous Pattern
    Part II: TPL Dataflow
    Definition (quoted from the Async CTP documentation): “TPL Dataflow (TDF) is a new .NET library for building concurrent applications. It promotes actor/agent-oriented designs through primitives for in-process message passing, dataflow, and pipelining. TDF builds upon the APIs and scheduling infrastructure provided by the Task Parallel Library (TPL) in .NET 4, and integrates with the language support for asynchrony provided by C#, Visual Basic, and F#.”
    This means: data manipulation processed asynchronously. “TPL Dataflow is focused on providing building blocks for message passing and parallelizing CPU- and I/O-intensive applications.” Data manipulation is another hot area when designing asynchronous and parallel applications: how do you sync data access in a parallel environment? How do you avoid concurrency issues? How do you notify when data is available? How do you control how much data is waiting to be consumed? etc.
    Dataflow Blocks
    TDF provides data and action processing blocks. Imagine having preconfigured data processing pipelines to choose from, depending on the type of behavior you want. The most basic block is the BufferBlock<T>, which provides storage for some kind of data (instances of <T>). So, let’s review the data processing blocks available. Blocks are categorized into three groups:
    Buffering Blocks
    Executor Blocks
    Joining Blocks
    Think of them as electronic circuitry components :)
    1. BufferBlock<T>: a FIFO (First In, First Out) queue. You can Post data to it and then Receive it synchronously or asynchronously. It synchronizes data consumption for only one receiver at a time (you can have many receivers, but only one will actually process each item).
    2. BroadcastBlock<T>: the same FIFO queue for messages (instances of <T>), but it links the receiving event to all consumers (it makes the data available for consumption to N consumers). The developer can provide a function to make a copy of the data if necessary.
    3. WriteOnceBlock<T>: it stores only one value, and once it’s been set it can never be replaced or overwritten again (immutable after being set). As with BroadcastBlock<T>, all consumers can obtain a copy of the value.
    4. ActionBlock<TInput>: this executor block lets us define an operation to be executed when posting data to the queue, so we must pass in a delegate/lambda when creating the block. Posting data will result in an execution of the delegate for each item in the queue. You can also specify how many parallel executions to allow (degree of parallelism).
    5. TransformBlock<TInput, TOutput>: an executor block designed to transform each input, which is why it defines an output parameter. It ensures messages are processed and delivered in order.
    6. TransformManyBlock<TInput, TOutput>: similar to TransformBlock, but produces one or more outputs from each input.
    7. BatchBlock<T>: combines N single items into one batch item (it buffers and batches inputs).
    8. JoinBlock<T1, T2, …>: it generates tuples from all inputs (it aggregates inputs). Inputs can be of any types you want (T1, T2, etc.).
    9. BatchJoinBlock<T1, T2, …>: aggregates tuples of collections. It generates collections for each type of input and then creates a tuple to contain each collection (Tuple<IList<T1>, IList<T2>>).
    Next time I will show some examples of usage for each TDF block. * Images taken from Microsoft’s Async CTP documentation.
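    In the meantime, here is a minimal sketch of the idea (a TransformBlock feeding an ActionBlock), written against the TPL Dataflow API as it later shipped in the System.Threading.Tasks.Dataflow package, so names and options may differ slightly from the CTP bits:

        using System;
        using System.Threading.Tasks.Dataflow;

        class DataflowSketch
        {
            static void Main()
            {
                // TransformBlock: parses each incoming string into an int.
                var parse = new TransformBlock<string, int>(s => int.Parse(s));

                // ActionBlock: consumes each parsed value (the end of the pipeline).
                var print = new ActionBlock<int>(n => Console.WriteLine("Got {0}", n));

                // Wire the pipeline together and let completion flow downstream.
                parse.LinkTo(print, new DataflowLinkOptions { PropagateCompletion = true });

                foreach (var s in new[] { "1", "2", "3" })
                    parse.Post(s);

                parse.Complete();          // no more input
                print.Completion.Wait();   // block until everything has been processed
            }
        }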

  • Passenger_wsgi.py given precedence over DirectoryIndex?

    - by Walkerneo
    I was having an issue with my site today, where Apache wasn't serving index.php by default. I had moved passenger_wsgi.py to the directory above the document root so that I could serve Python files without having to use PassengerAppRoot in the .htaccess file. I wanted to do this because I set up a development sub-domain on the site, and I wanted to use a different passenger_wsgi for the two domains, but that meant having different .htaccess files for the different PassengerAppRoots. Is there a way to have passenger_wsgi.py where it was and still let Apache serve the index.php files? Edit: I'm sorry, I'm tired. I just realized that the way this probably works is that passenger_wsgi.py is handling the routing instead of Apache.
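    A hedged sketch of one possible arrangement (the directives are standard mod_passenger/Apache ones, but the paths are placeholders and this is untested against this exact setup): give each (sub)domain its own PassengerAppRoot in its vhost rather than relying on where passenger_wsgi.py sits, and explicitly switch Passenger off where plain PHP should win:

        # In the main site's vhost (or .htaccess, if AllowOverride permits):
        DirectoryIndex index.php index.html
        PassengerEnabled off                        # let Apache/PHP handle this tree

        # In the dev sub-domain's vhost:
        PassengerAppRoot /home/example/apps/dev     # placeholder path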

  • Should I format USB sticks and SD cards to FAT, FAT32, exFAT or NTFS? (Windows files, live Linux distros)

    - by superuser
    Does which one to choose depend on the media size or on some other parameters? In Windows 7, FAT16 is the default; in pendrivelinux.com's Universal USB Installer, FAT32. Which one to choose? How about NTFS for Windows use? How about exFAT? It is the Microsoft-designed filesystem for removable media. Is there a difference between USB sticks and SD cards in this regard? Edit: Seeing developments in the other thread, should I still use something like exFAT if I don't want Recycle Bins created on every single machine I plug my USB thumb drive into?

  • I can't get the GRUB menu to show up during boot

    - by wim
    After trying (and failing) to install better ATI drivers in 11.10, I've somehow lost my GRUB menu at boot time. The screen does change to the familiar purple colour, but instead of a list of boot options it's just a blank solid colour, which then disappears quickly and boots into the default entry normally. How can I get the bootloader menu back? I've tried sudo update-grub and also various different combinations of resolutions and colour depths in the startupmanager application with no success (640x480, 1024x768, 1600x1200, 16 bits, 8 bits, 10 second delay, 7 second delay, 2 second delay...). Edit: I have already tried holding down Shift during bootup and it does not seem to change the behaviour. I get the message "GRUB Loading" in the terminal, but then where the GRUB menu normally appears I get a solid blank magenta screen for a while. Here are the contents of /etc/default/grub:
    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    # For full documentation of the options in this file, see:
    # info -f grub -n 'Simple configuration'
    GRUB_DEFAULT=0
    GRUB_HIDDEN_TIMEOUT=0
    GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    GRUB_CMDLINE_LINUX=" vga=798 splash"
    # Uncomment to enable BadRAM filtering, modify to suit your needs
    # This works with Linux (no patch required) and with any kernel that obtains
    # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
    #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
    # Uncomment to disable graphical terminal (grub-pc only)
    #GRUB_TERMINAL=console
    # The resolution used on graphical terminal
    # note that you can use only modes which your graphic card supports via VBE
    # you can see them in real GRUB with the command `vbeinfo'
    #GRUB_GFXMODE=640x480
    # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
    #GRUB_DISABLE_LINUX_UUID=true
    # Uncomment to disable generation of recovery mode menu entries
    #GRUB_DISABLE_RECOVERY="true"
    # Uncomment to get a beep at grub start
    #GRUB_INIT_TUNE="480 440 1"

  • Why won't my graphics work in Ubuntu 12.04 LTS?

    - by user170974
    I'm very new to Ubuntu and to Linux in general, and took the leap and formatted my PC to Ubuntu 12.04 LTS very recently :) I seem to be having some trouble getting my graphics card to run properly. I looked over what information I could find, but I still cannot get it up and running, and figured this was a good place to ask for help. The information I can find on my graphics is as follows. The terminal command lspci outputs:
    01:05.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] RS880M [Mobility Radeon HD 4225/4250]
    02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Madison [Mobility Radeon HD 5650/5750 / 6530M/6550M]
    I tried using a mixture of the following links:
    How do I fix my installation of ATI Catalyst Video Driver in 12.04 LTS?
    What is the correct way to install ATI Catalyst Video Drivers (fglrx)?
    Ubuntu Precise Installation Guide
    But it does not seem to work, since running fglrxinfo in the terminal gives:
    display: :0.0  screen: 0
    OpenGL vendor string: VMware, Inc.
    OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x301)
    OpenGL version string: 1.4 (2.1 Mesa 9.0.3)
    What am I doing wrong here? All help appreciated ;) Edit: I have tried the legacy driver from www2.ati.com/drivers/linux/amd-driver-installer-catalyst-13-4-linux-x86.x86_64.zip and I also tried the guide at https://launchpad.net/~makson96/+archive/fglrx which caused the system to crash (black screen, no boot). Neither seemed to work. I did however reinstall Ubuntu 12.04 LTS and re-tried both with no success. Reinstalling Ubuntu did however fix the broken dependencies problems, etc.

  • Can I create a DC without a DNS Server?

    - by onik
    So as the title says, I need to promote a standalone Win2008R2 server to a Domain Controller, and I don't need a DNS Server (I think), as there will be no clients connected to the domain; it will only be used for Remote Desktop Services. Yes, I know it's considered bad practice to install other roles on the DC, but in this case it's necessary. Do I need to install the DNS Server, and if I do, how do I make it as transparent as possible? EDIT: It seems that I do need to install the DNS Server, so how can I configure it not to mess up my entire domain? For example: the server I need to promote is rdc.mydomain.com, and it has an A entry pointing to its IP in the current DNS, while the other servers under mydomain.com are running Linux and don't need to know anything about this Windows box. The domain uses a third-party DNS, and all edits and updates need to be done via a separate web page; our servers don't have write/update access.

  • Getting Started with Cloud Computing

    - by juanlarios
    You’ve likely heard about how Office 365 and Windows Intune are great applications to get you started with Cloud Computing. Many of you emailed me asking for more info on what Cloud Computing is, including the distinction between "Public Cloud" and "Private Cloud". I want to address these questions and help you get started. Let's begin with a brief set of definitions and some places to find more info; however, an excellent place where you can always learn more about Cloud Computing is the Microsoft Virtual Academy. Public Cloud computing means that the infrastructure to run and manage the applications users are taking advantage of is run by someone else and not you. In other words, you do not buy the hardware or software to run your email or other services being used in your organization – that is done by someone else. Users simply connect to these services from their computers and you pay a monthly subscription fee for each user that is taking advantage of the service. Examples of Public Cloud services include Office 365, Windows Intune, Microsoft Dynamics CRM Online, Hotmail, and others. Private Cloud computing generally means that the hardware and software to run services used by your organization is run on your premises, with the ability for business groups to self-provision the services they need based on rules established by the IT department. Generally, Private Cloud implementations today are found in larger organizations but they are also viable for small and medium-sized businesses since they generally allow an automation of services and reduction in IT workloads when properly implemented. Having the right management tools, like System Center 2012, to implement and operate Private Cloud is important in order to be successful. So – how do you get started? The first step is to determine what makes the most sense to your organization. The nice thing is that you do not need to pick Public or Private Cloud – you can use elements of both where it makes sense for your business – the choice is yours. When you are ready to try and purchase Public Cloud technologies, the Microsoft Volume Licensing web site is a good place to find links to each of the online services. In particular, if you are interested in a trial for each service, you can visit the following pages: Office 365, CRM Online, Windows Intune, and Windows Azure. For Private Cloud technologies, start with some of the courses on Microsoft Virtual Academy and then download and install the Microsoft Private Cloud technologies including Windows Server 2008 R2 Hyper-V and System Center 2012 in your own environment and take it for a spin. Also, keep up to date with the Canadian IT Pro blog to learn about events Microsoft is delivering such as the IT Virtualization Boot Camps and more to get you started with these technologies hands on. Finally, I want to ask for your help to allow the team at Microsoft to continue to provide you what you need. Twice a year through something we call "The Global Relationship Study" – they reach out and contact you to see how they're doing and what Microsoft could do better. If you get an email from "Microsoft Feedback" with the subject line "Help Microsoft Focus on Customers and Partners" between March 5th and April 13th, please take a little time to tell them what you think. Cloud Computing Resources: Microsoft Server and Cloud Computing site – information on Microsoft's overall cloud strategy and products. Microsoft Virtual Academy – for free online training to help improve your IT skillset. 
    Office 365 Trial/Info page – get more information or try it out for yourself.
    Office 365 Videos – see how businesses like yours have used Office 365 to transition to the cloud.
    Windows Intune Trial/Info – get more information or try it out for yourself.
    Microsoft Dynamics CRM Online page – information on trying and licensing Microsoft Dynamics CRM Online.
    Additional Resources You May Find Useful:
    Springboard Series – your destination for technical resources, free tools and expert guidance to ease the deployment and management of your Windows-based client infrastructure.
    TechNet Evaluation Center – try some of our latest Microsoft products for free, like System Center 2012 pre-release products, and evaluate them before you buy.
    AlignIT Manager Tech Talk Series – a monthly streamed video series with a range of topics for both infrastructure and development managers. Ask questions and participate real-time or watch the on-demand recording.
    Tech·Days Online – discover what's next in technology and innovation with Tech·Days session recordings, hands-on labs and Tech·Days TV.

  • Using Team Foundation Server

    - by joe
    I am using Team Foundation Server / Visual Studio as a client to manage my Visual FoxPro source code. Every time I "check out for edit", all my folders and files are read-only, and when I open my projects it is as if the project is being opened for the first time, asking me if I would like to set the current path as the default path... This is causing other serious problems, such as bad references to "missing library files" which are actually not missing. I know Visual FoxPro is old stuff... I hope someone who went through the same errors could help me out. Thanks a lot guys

  • Nvidia GeForce GT-520M-CN on Intel DH61WW, Ubuntu 12.04

    - by j goseeped
    Hi people, I hope you can help a little bit; I appreciate your time. I have this desktop: i7 2600, 8 GB DDR3 RAM, Intel DH61WW board, Nvidia GeForce GT 520-CN 2 GB DDR3. I just installed Ubuntu 12.04 64-bit, kernel 3.2.0-23-generic. I want to set up two 22" Samsung LED monitors and get my video card working.
    1) I downloaded and installed Nvidia driver 295.59 (and also tried 302.17): apt-get update and upgrade, apt-get install build-essential linux-headers-$(uname -r), apt-get remove --purge nvidia*, apt-get remove --purge xserver-xorg-video-nouveau, then edited /etc/modprobe.d/blacklist.conf to add blacklist vga16fb, blacklist nouveau, blacklist rivafb, blacklist nvidiafb, blacklist rivatv; then sh NVIDIA.run, sudo service lightdm start, reboot, nvidia-xconfig.
    2) After rebooting I get 800x600 and nvidia-settings says: "You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run nvidia-xconfig as root), and restart the X server."
    3) I changed xorg.conf a little bit to set up the resolution properly.
    4) I don't have any image on the monitor and I don't have any options in Nvidia X Server Settings.
    lspci | grep VGA:
    00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    01:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 520] (rev a1)
    egrep -i 'glx|nvidia' /var/log/Xorg.0.log:
    [ 12.005] (II) LoadModule: "glx"
    [ 12.005] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
    [ 12.575] (II) Module glx: vendor="NVIDIA Corporation"
    [ 12.585] (II) NVIDIA GLX Module 302.17 Tue Jun 12 16:22:45 PDT 2012
    [ 12.585] (II) Loading extension GLX
    [ 13.037] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)
    [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=3 (/dev/input/event10)
    [ 13.044] (II) config/udev: Adding input device HDA NVidia HDMI/DP,pcm=7 (/dev/input/event9)
    glxinfo | grep direct:
    Xlib: extension "GLX" missing on display ":0.0".
    Error: couldn't find RGB GLX visual or fbconfig
    Sorry, my English is not very good. Thanks, guys.

  • How do I make apt-get install commands not display every package that is being installed?

    - by rajlego
    When I install something in the terminal, it often shows me a few things for status. For one, it shows the download rate (which is fine). However, when I install something, it can display:
    Unpacking libgranite2:amd64 (0.3.0~r732+pkg64~ubuntu0.3.1) ...
    Selecting previously unselected package slingshot-launcher.
    Preparing to unpack .../slingshot-launcher_0.7.6.1+r421+pkg32~ubuntu0.3.1_amd64.deb ...
    Unpacking slingshot-launcher (0.7.6.1+r421+pkg32~ubuntu0.3.1) ...
    Selecting previously unselected package contractor.
    Preparing to unpack .../contractor_0.3.1~r136+pkg22~ubuntu0.3.1_amd64.deb ...
    Unpacking contractor (0.3.1~r136+pkg22~ubuntu0.3.1) ...
    Selecting previously unselected package apport-hooks-elementary.
    Preparing to unpack .../apport-hooks-elementary_0.1-0~35~saucy1_all.deb ...
    Unpacking apport-hooks-elementary (0.1-0~35~saucy1) ...
    Processing triggers for hicolor-icon-theme (0.13-1) ...
    Processing triggers for libglib2.0-0:amd64 (2.40.0-2) ...
    Processing triggers for man-db (2.6.7.1-1) ...
    Setting up libgranite-common (0.3.0~r732+pkg64~ubuntu0.3.1) ...
    Setting up libgranite2:amd64 (0.3.0~r732+pkg64~ubuntu0.3.1) ...
    Setting up slingshot-launcher (0.7.6.1+r421+pkg32~ubuntu0.3.1) ...
    Setting up contractor (0.3.1~r136+pkg22~ubuntu0.3.1) ...
    Setting up apport-hooks-elementary (0.1-0~35~saucy1) ...
    Processing triggers for libc-bin (2.19-0ubuntu6) ...
    I would rather that not show up. I only want to see the download rate, not all that other stuff. How do I do this? EDIT: I would also like that output to be stored somewhere else in case something goes wrong, or for it to be expandable in the terminal.
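    A hedged sketch of a few ways to get that effect (the package name is a placeholder): apt-get's own -q/-qq flags trim the per-package chatter, the full transcript is already kept in /var/log/apt/term.log if something goes wrong, and you can also redirect the noise to your own file:

        # Quieter install; -q trims progress output, -qq shows almost nothing
        # (use -qq with care on real installs)
        sudo apt-get -qq install some-package

        # Keep the verbose output, but send it to a file instead of the screen
        sudo apt-get install some-package >> ~/apt-install.log

        # The full transcript of every apt run is also logged here by default
        less /var/log/apt/term.log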

  • FreeBSD 8 and Samba 3.3 smb_panic

    - by scraft3613
    What is causing samba to crash? Need help diagnosing ...
    [2010/06/14 16:11:42, 0] lib/fault.c:fault_report(40)
    ===============================================================
    [2010/06/14 16:11:42, 0] lib/fault.c:fault_report(41)
    INTERNAL ERROR: Signal 11 in pid 951 (3.3.8)
    Please read the Trouble-Shooting section of the Samba3-HOWTO
    [2010/06/14 16:11:42, 0] lib/fault.c:fault_report(43)
    From: http://www.samba.org/samba/docs/Samba3-HOWTO.pdf
    [2010/06/14 16:11:42, 0] lib/fault.c:fault_report(44)
    ===============================================================
    [2010/06/14 16:11:42, 0] lib/util.c:smb_panic(1673)
    PANIC (pid 951): internal error
    [2010/06/14 16:53:40, 0] smbd/server.c:main(1274)
    Edit: A bit more info -- log.smbd:
    [2010/06/14 15:59:02, 0] smbd/server.c:main(1274)
    smbd version 3.3.8 started.
    Copyright Andrew Tridgell and the Samba Team 1992-2009
    [2010/06/14 15:59:02, 0] printing/print_cups.c:cups_connect(103)
    Unable to connect to CUPS server localhost:631 - Connection refused
    [2010/06/14 15:59:02, 0] printing/print_cups.c:cups_connect(103)
    Unable to connect to CUPS server localhost:631 - Connection refused
    smb.conf:
    [global]
    workgroup = WASH
    netbios name = PROD1
    [media]
    path = /jon/media
    read only = no
    guest ok = yes

  • Light on every model and not in the whole scene

    - by alecnash
    I am using a custom shader and try to pass the effect on my Models like that:
    foreach (ModelMesh mesh in Model.Meshes)
    {
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            part.Effect = effect;
        }
        mesh.Draw();
    }
    My only issue is that every Model now has its own light source in it. Why is this happening and is this a problem of my shader? Edit: These are the parameters passed to the shader:
    private void Get_lambertEffect()
    {
        if (_lambertEffect == null)
            _lambertEffect = Engine.LambertEffect;

        // Lambert technique (LambertWithShadows, LambertWithShadows2x2PCF, LambertWithShadows3x3PCF)
        _lambertEffect.CurrentTechnique = _lambertEffect.Techniques["LambertWithShadows3x3PCF"];
        _lambertEffect.Parameters["texelSize"].SetValue(Engine.ShadowMap.TexelSize);

        // ShadowMap parameters
        _lambertEffect.Parameters["lightViewProjection"].SetValue(Engine.ShadowMap.LightViewProjectionMatrix);
        _lambertEffect.Parameters["textureScaleBias"].SetValue(Engine.ShadowMap.TextureScaleBiasMatrix);
        _lambertEffect.Parameters["depthBias"].SetValue(Engine.ShadowMap.DepthBias);
        _lambertEffect.Parameters["shadowMap"].SetValue(Engine.ShadowMap.ShadowMapTexture);

        // Camera view and projection parameters
        _lambertEffect.Parameters["view"].SetValue(Engine._camera.ViewMatrix);
        _lambertEffect.Parameters["projection"].SetValue(Engine._camera.ProjectionMatrix);
        _lambertEffect.Parameters["world"].SetValue(Matrix.CreateScale(Size) * world);

        // Light and color
        _lambertEffect.Parameters["lightDir"].SetValue(Engine._sourceLight.Direction);
        _lambertEffect.Parameters["lightColor"].SetValue(Engine._sourceLight.Color);
        _lambertEffect.Parameters["materialAmbient"].SetValue(Engine.Material.Ambient);
        _lambertEffect.Parameters["materialDiffuse"].SetValue(Engine.Material.Diffuse);
        _lambertEffect.Parameters["colorMap"].SetValue(ColorTexture.Create(Engine.GraphicsDevice, Color.Red));
    }

  • Ubuntu from console/command-line/shell

    - by Xolve
    Early Linux distros, though they required a lot of manual work, were quite good to use from the command line. If the X server didn't start, or you just wanted a shell to work in, they all supported it. The network was configured by init; sound was up and ready; newly inserted devices would be configured and their configuration was placed in fstab. There were also small scripts I found on many distros which used windows under X but switched to ncurses on the console. But now all of this needs a GUI with a desktop manager (KDE, GNOME); the new paradigms :'-( (NetworkManager, hal, etc.) require a GUI. So if you are on just the command line you have to be root and edit config files or type long commands; it looks like they believe only geeky admins need that. Is there any way to make this easy in Ubuntu from the shell again?

  • UTF-8 bit representation

    - by Yanick Rochon
    I'm learning about the UTF-8 standard and this is what I'm learning:
    Definition and bytes used
    UTF-8 binary representation                   Meaning
    0xxxxxxx                                      1 byte for 1 to 7 bit chars
    110xxxxx 10xxxxxx                             2 bytes for 8 to 11 bit chars
    1110xxxx 10xxxxxx 10xxxxxx                    3 bytes for 12 to 16 bit chars
    11110xxx 10xxxxxx 10xxxxxx 10xxxxxx           4 bytes for 17 to 21 bit chars
    And I'm wondering: why doesn't the 2-byte UTF-8 code start with 10xxxxxx instead, thus gaining 1 bit all the way up to 22 bits with a 4-byte UTF-8 code? The way it is right now, 64 possible values are lost (from 10000000 to 10111111). I'm not trying to argue with the standard, but I'm wondering why this is so? ** EDIT ** Even, why isn't it
    UTF-8 binary representation                   Meaning
    0xxxxxxx                                      1 byte for 1 to 7 bit chars
    110xxxxx xxxxxxxx                             2 bytes for 8 to 13 bit chars
    1110xxxx xxxxxxxx xxxxxxxx                    3 bytes for 14 to 20 bit chars
    11110xxx xxxxxxxx xxxxxxxx xxxxxxxx           4 bytes for 21 to 27 bit chars
    ...? Thanks!
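    To make the first table concrete, a small worked example: U+00E9 ("é") needs 8 bits (11101001), so it takes the 2-byte form, and its bits are spread over the template 110xxxxx 10xxxxxx:
        110 00011  10 101001  ->  11000011 10101001  ->  0xC3 0xA9
    As for the 64 "lost" values, the usual rationale is self-synchronization: because 10xxxxxx can only ever be a continuation byte and never the start of a character, a decoder that lands in the middle of a stream (or hits a corrupted byte) can always find the next character boundary. The proposed 110xxxxx xxxxxxxx layout would give up that property in exchange for the extra bits.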

  • Safari, IIS and optional Client Certificates

    - by Philipp
    I've an ASP.NET web app running on IIS 7.5. The web server is configured to accept client certificates. Unfortunately, visitors using the Safari browser are unable to view the page. It is the same problem as described under the following link: http://www.mnxsolutions.com/apache/safari-providing-an-ssl-error-client-certificate-rejected%E2%80%9D-when-other-browsers-work.html Does anyone know how to solve this? I'd really appreciate your help. edit: This seems to be the same problem: http://superuser.com/questions/231695/iis7-5-ssl-question-safari-users-get-a-prompt-of-certificate-to-select

  • Slow write speeds on new Gigabit home file server

    - by Ryan Holder
    So I finally got all my parts delivered to set up a home file/backup server this week. It's currently running Ubuntu Server and I'm using Samba to share files on my network. The server currently has a 2 TB WD Green drive in it, connected to an Asus M5A78L-M. This is then connected via CAT6a to my new Gigabit switch (TP-Link TL-SG1005D). My home desktop is also connected to this switch, again through CAT6a cable. Currently when transferring files I get a solid 100 MB/s reading from the server to my Windows machine. When copying from my Windows machine to the server I get around 30-38 MB/s. I know this drive is capable of faster speeds, so would anybody have an idea of where the bottleneck is? Any help would be greatly appreciated :) EDIT: I have found that FTP's write speed is much closer to my Samba read speed, so I'm going to guess that it is a software problem rather than hardware.

  • something like persistent X forwarding?

    - by Arthur Ulfeldt
    I'm having trouble with the title on this one, please edit. When users connect to a VM with VNC/NX/RDP/other-TLA they get a persistent desktop in a window. When they connect using ssh -X forwarding they get a local window, managed by the local window manager, that is not persistent. 1: Is there a way to run a program on the VM and have it managed locally AND have it persistent? 2: Can the client be on Windows or OS X? PS: in this case the VMs are running Ubuntu.

  • Add iPhones, iPads to existing OpenVPN server

    - by Zoran
    Could someone please provide me with info on how to connect an iPhone and iPad to an existing OpenVPN server based on Ubuntu 8.04? I saw similar posts (such as Simplest VPN setup for iphone on Debian Linux?) but I haven't seen an answer which will help me. Most of our client machines (which connect to OpenVPN) are Windows 7 & Vista. Now I have to add several iPhone and iPad users. How do I accomplish that task? EDIT: One more thing, I can not use GuizmoVPN since that would require jailbreaking the iPhone, which is not a possible solution.

  • Whole continent simulation [on hold]

    - by user2309021
    Let's suppose I am planning to create a simulation of an entire continent at some point in the past (let's say, around 0 A.D.). Is it feasible to spawn a hundred million actors that interact with each other and their environments? Having them reproduce, extract resources, etc.? The fact is that I actually want to create a simulation that allows me to zoom in from a view of the entire continent down to a single village, and interact with it. (Think as if you could keep zooming in on the campaign map of any Total War game and the transition to the battle map was seamless, not a change of the "game mode".) By the way, I have never made a game in my entire life (I have programmed normal desktop applications, though), so I am really having trouble wrapping my head around how to implement such a thing. Even while thinking about how to implement a simple population simulator without a graphical interface, I think that the O(n) complexity of traversing an array and telling all people to get one year older each time the program ticks is kind of stupid. Any kind of help would be greatly appreciated :) EDIT: After being put on hold, I shall specify a question. How would you implement a simulation of all basic human dynamics (reproduction, resource consumption) in an entire continent (with millions of people)?
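    One common trick, offered here only as a hedged sketch (the rates, array sizes and class names below are made up for illustration): at the continent scale, don't store a hundred million individual actors at all, but aggregate them into cohorts, e.g. a count of people per age, so a yearly tick costs work proportional to the number of age buckets rather than the number of people:

        using System;
        using System.Linq;

        class ContinentSketch
        {
            // Population stored as counts per age (0..120), not as individual actors.
            static long[] byAge = new long[121];

            // One simulated year: O(number of age buckets), regardless of population size.
            static void TickOneYear(double birthRate, double deathRate)
            {
                long births = (long)(byAge.Sum() * birthRate);

                // Shift every cohort up one year, applying a crude uniform death rate;
                // whoever was already in the last bucket simply ages out.
                for (int age = byAge.Length - 1; age >= 1; age--)
                    byAge[age] = (long)(byAge[age - 1] * (1.0 - deathRate));

                byAge[0] = births;
            }

            static void Main()
            {
                byAge[20] = 100000000;                    // seed: 100 million twenty-year-olds
                for (int year = 0; year < 50; year++)
                    TickOneYear(0.035, 0.03);
                Console.WriteLine("Population after 50 years: {0}", byAge.Sum());
            }
        }

    Individual actors then only need to exist for whatever is actually on screen (a village's worth of people), with the cohort statistics driving everything outside the camera's view, which is one way to make a seamless continent-to-village zoom tractable.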
