Search Results

Search found 16560 results on 663 pages for 'far'.


  • Essential roles for web application team

    - by jromero
    Some friends of mine came up with an idea for a web application which we (so far) think could be great. I did the analysis and all the early stages of the development process, and I'm about to start coding. This is barely a mid-level project, so I consider one developer (myself) to be enough. The thing is that we are trying to assign roles to each of us so we can stay focused on our duties and keep our responsibilities within the team clear. We are a crew of four: three of us (my friends) are business people who would handle the marketing, customer relationships, management and accounting, and I'm basically the developer. I plan to get them involved in the development process by giving them documentation to write and using them as testers, besides the management duties they already have. Perhaps someone out there has been in the same situation, so I would appreciate it if you shared the experience so we can effectively assign ourselves positions in the project based on what I explained above. What are the essential roles, or the optimal team layout, for developing this idea successfully? The question is not strictly about programming, but it relates to building a software venture beyond the code, which I'm sure plenty of us are aiming for. Any help is really appreciated! Regards.

    Read the article

  • Designing a social network with CQRS, graph databases and relational databases in mind

    - by Siraj Mansour
    I have done quite an amount of research on the topic so far, but I couldn't come to a conclusion. I am designing a social network, and during my research I stumbled upon graph databases; I found neo4j pretty interesting for user relations and traversing through nodes. I also thought of using a relational database such as MS-SQL or MySQL to store entity data only, and depending on neo4j for the connections between entities. Of course this means more work in my application to store and pull data in and out of two different sources. My first question: is this (graph + relational) approach a good one for designing my social network, keeping in mind that users on social networks don't have to be in sync with real data to the split second? What are the positives and negatives of this approach? My second question: I've been doing some reading on CQRS, and as I understand it, it is mostly useful for collaborative environments, and for environments where users see a lot of "stale" data. Social networks have shared comments, events, etc., and many users query or update the same data. Could CQRS be a helpful approach? Would it give any performance/scalability benefits, or just non-useful complexity? Is it fairly applicable with my possible choice of the (graph + relational) databases approach mentioned above? My purpose is to know if the approaches I have mentioned seem good enough for the business context.
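
    As a concrete illustration of the split described above, here is a minimal sketch of the write and read path, assuming SQLite as a stand-in for the relational store and the neo4j Python driver (pip install neo4j); the bolt URL, the credentials, and the FRIEND relationship type are illustrative assumptions, not recommendations:

    ```python
    # Sketch of the dual-store pattern: entity data in a relational DB,
    # connections in neo4j. SQLite stands in for MS-SQL/MySQL here, and the
    # bolt URL, credentials, and FRIEND relationship type are assumptions.
    import sqlite3
    from neo4j import GraphDatabase

    sql = sqlite3.connect("entities.db")
    sql.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

    graph = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    def create_user(user_id, name, email):
        # Entity attributes go to the relational store only.
        sql.execute("INSERT INTO users (id, name, email) VALUES (?, ?, ?)",
                    (user_id, name, email))
        sql.commit()
        # The graph holds just a lightweight node keyed by the relational id.
        with graph.session() as session:
            session.run("MERGE (:User {id: $id})", id=user_id)

    def befriend(a_id, b_id):
        # Connections between entities live only in the graph.
        with graph.session() as session:
            session.run(
                "MATCH (a:User {id: $a}), (b:User {id: $b}) MERGE (a)-[:FRIEND]->(b)",
                a=a_id, b=b_id)

    def friends_of(user_id):
        # Traverse in neo4j, then hydrate the entity rows from SQL.
        with graph.session() as session:
            ids = [r["fid"] for r in session.run(
                "MATCH (:User {id: $id})-[:FRIEND]-(f:User) RETURN f.id AS fid",
                id=user_id)]
        if not ids:
            return []
        placeholders = ",".join("?" * len(ids))
        return sql.execute(f"SELECT * FROM users WHERE id IN ({placeholders})", ids).fetchall()
    ```

    The visible cost of the approach shows up in friends_of: every read that crosses the boundary needs one graph query plus one relational query, which is exactly the "more work in my application" the question anticipates.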

    Read the article

  • IMAP/POP won't allow sending emails to outside recipients - New Dell PowerEdge T310 running SBS 2011

    - by user779887
    I have a brand new, out-of-the-box Dell PowerEdge T310 running SBS 2011. Our employees at our remote offices can't send emails to recipients outside of our own domain, while the workstations at the same location as the server aren't having any problem. I would at this point like to say "thanks a lot" to the super-minds at Microsoft for protecting our email server from rogue computers attempting to send fake emails (silly me, I thought proper login and password conventions would handle that). I know this is something to do with relaying, but thus far nothing from any of the posts I've read has changed anything. Honestly, if someone is crafty enough to guess one of our login/password combos, let them send emails through our server, I don't care!
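
    One way to see exactly what the server objects to is to attempt an authenticated submission from a remote machine and read the SMTP response. Below is a minimal probe, assuming the server accepts client submissions on port 587 with STARTTLS; the hostname, credentials, and addresses are placeholders:

    ```python
    # Probe whether the server relays for an authenticated remote client.
    # mail.example.com, the credentials, and both addresses are placeholders.
    import smtplib

    msg = "Subject: relay test\r\n\r\nJust checking relay permissions."
    try:
        with smtplib.SMTP("mail.example.com", 587, timeout=10) as smtp:
            smtp.starttls()
            smtp.login("DOMAIN\\remoteuser", "password")
            smtp.sendmail("user@ourdomain.example", ["someone@outside.example"], msg)
            print("Relayed OK: authentication is being honored")
    except smtplib.SMTPRecipientsRefused as e:
        # Exchange typically answers "550 5.7.1 Unable to relay" here when the
        # receive connector doesn't grant relay permission to this client.
        print("Relay refused:", e.recipients)
    except smtplib.SMTPAuthenticationError as e:
        print("Login itself failed:", e)
    ```

    If the login succeeds but the external recipient is refused, that points at the server's receive-connector permissions rather than anything on the clients.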

    Read the article

  • Edirol UA-25 driver on Windows 8 Preview

    - by ETFairfax
    I've installed the Windows 8 Developer Preview on a laptop and everything is working apart from my external sound card. The sound card in question is this one, and I've tried downloading and installing the Windows 7 drivers from here, but no joy so far. As per the instructions, I remove the USB cable from the port. I then run setup.exe (which I have set to run in Windows 7 compatibility mode). It starts doing its stuff, then says "Please insert the USB cable and wait"... and that's where things stop! Nothing happens. I don't even hear the "ding-dong" of the USB device being recognised. Obviously I realise that the drivers are for Windows 7 and not 8, but I was hoping that someone would be able to crowbar it into working! Any tips appreciated. Thanks

    Read the article

  • Reflections based on distance from plane

    - by Andrea Benedetti
    Let's consider, for example, a surface like a volleyball court: we can see that the legs and shoes of the players are reflected, with a blur effect, but their bodies and the stadium are not (nor is any object that isn't near the court). I've already made a reflection effect, but it works as a specular reflection, and I need to achieve an effect like the photo above. So I would like to make a reflection that is based on the distance between the object and the plane; this way an object close to the plane would reflect more than an object positioned far away from it. What is the best way to achieve this effect? My first idea was to use the depth value (taken from the reflected camera) and use that value to blend between the reflection and the court, but I don't know if that's a correct approach. Edit: as rendering engine I use Ogre, which already provides a reflection system: reflecting the camera through a plane (obviously I can select the models to draw from the reflected camera). After a render-to-texture pass I can blend the reflected texture with the original plane. So, if possible, I'm looking for the way that best suits my system.
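
    The blend idea in the question reduces to a per-pixel fade factor driven by distance from the plane. Here is a minimal CPU-side sketch with numpy, assuming you already have the base texture, the reflected texture, and each reflected point's distance from the plane (derivable from the reflected camera's depth); fade_distance and reflectivity are arbitrary tuning constants, and in practice the same arithmetic would live in a fragment shader:

    ```python
    import numpy as np

    def blend_reflection(court_rgb, reflection_rgb, dist_from_plane,
                         fade_distance=2.0, reflectivity=0.5):
        """Fade the reflection out as the reflected point moves off the plane.

        court_rgb, reflection_rgb: float arrays of shape (H, W, 3) in [0, 1].
        dist_from_plane: float array of shape (H, W).
        """
        # 1.0 at the plane, falling linearly to 0.0 at fade_distance and beyond.
        fade = np.clip(1.0 - dist_from_plane / fade_distance, 0.0, 1.0)
        alpha = (reflectivity * fade)[..., np.newaxis]  # per-pixel blend weight
        return (1.0 - alpha) * court_rgb + alpha * reflection_rgb
    ```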

    Read the article

  • TP-Link TL-WA701N not working well as a wireless extender

    - by djechelon
    I bought the device in subject to extend the range of my WPA2-PSK-protected Wi-Fi network, powered by a TP-Link TL-WR340G (AP + router). I configured it as follows: Operation mode: Universal Repeater; MAC of AP: scanned my SSID and got it; Channel width: 20MHz; Security options: the same as the parent AP (WPA2-PSK with AES encryption). After configuration, inSSIDer showed me two APs beaconing the same SSID at different SNRs (because I was close to the extender with my laptop). After a few hours my tablet, far from the parent AP, stopped working. I found that a scan reported two networks with the same SSID: one WPA-protected and one completely open. This happens very frequently. Rebooting the extender by unplugging it works, but it doesn't last long. Sometimes the extender stops transmitting at all; sometimes it beacons an open network to which nobody can connect (because there is no DHCP). What's wrong with my configuration?

    Read the article

  • Evoland: A Video Game About Video Game History

    - by Jason Fitzpatrick
    Browser-based Evoland is, hands down, one of the more clever video game concepts to come across our desk. The game itself is a history of video games: as you play, the game evolves from a limited 8-bit monochrome adventure into a modern game. You start off unable to do anything but move right and collect a treasure chest. That treasure chest unlocks the left key (keys are configured in a WASD-style keypad), which in turn allows you to move around a simple monochromatic forest clearing to unlock the rest of the movement keys. From there you begin unlocking more game features, effectively evolving the game from monochrome to 16- and then 64-bit color and unlocking various gameplay features. The game itself is short and can be played in about the same time you could watch a video covering the basics of various game changes over the last 30 years, but actually playing the game and watching the evolution in progress is far more rewarding. Hit up the link below to take it for a spin. Evoland [via Boing Boing]

    Read the article

  • Difference between RPM (yum) and apt-get

    - by Josh K
    What is the functional difference between the two? Do they just package things in different styles? I'm dipping my toe into the server pool and playing with an Ubuntu install right now, which uses apt-get. I'm also considering FreeBSD and Debian if I do decide to start running my own VPS. So far things have been very easy: sudo apt-get install apache2 and the like, with no issues at all. I'd like to know if there is a different learning curve for yum or its variants.

    Read the article

  • Email attachments sent to a group don't show up for some

    - by blsub6
    My boss sent out an email from my Exchange 2010 org with a PDF and a Word doc attached. He came back the next day and told me that some of the 8 or 10 people who received this email could open the attachments with no problem; the other 2 or 3 could not. One of the people who could not open the attachments went so far as to call Comcast (his email service provider) and ask them where his attachments went. Comcast told him that when they received the email, the attachment was 0 bytes in size. This may sound like more of a rant than a question, but I'm genuinely concerned: is there any possible way that something could have gone wrong on my end that sent the email to some recipients with the attachments and to some without?

    Read the article

  • OpenVPN vs. IPSec - Pros and Cons, what to use?

    - by jens
    Interestingly, I have not found any good search results when searching for "OpenVPN vs IPSec". I need to set up a private LAN over an untrusted network, and as far as I know both approaches are valid, but I do not know which one is better. I would be very thankful if you could list the pros and cons of both approaches, and maybe your suggestions and experiences of what to use. Update (regarding the comment/question): in my concrete case the goal is to have any number of servers (with static IPs) connected transparently with each other, but a small portion of "dynamic clients like road warriors" (with dynamic IPs) should also be able to connect. The main goal, however, is a "transparent secure network" running on top of an untrusted network. I am quite a newbie, so I do not know how to correctly interpret "1:1 point-to-point connections"; the solution should support broadcasts and all that stuff, so that it is a fully functional network... Thank you very much!! Jens

    Read the article

  • How to prove that an email has been sent?

    - by bguiz
    I have a dispute on my hands in which the other party (the landlord's real estate agent) dishonestly claims not to have received an email that I truly did send. My question is: what are the ways to prove that the email was indeed sent? Thus far, the methods that I have already thought of are: a screenshot of the mail in the outbox, and forwarding a copy of the original email. I am aware that other things like HTTP/SMTP headers exist as well. Are these useful for my purposes, and if so, how do I extract them? The email in question was sent using Yahoo webmail ( http://au.mail.yahoo.com/ ). Edit: I am not seeking legal advice here, just technical advice as to how to gather this information.
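
    On the header question specifically: if the webmail client lets you download or view the raw message from the Sent folder (most have a "view full headers" option), Python's standard email module can pull out the fields that document when and through which servers the message travelled. A minimal sketch; the filename is a placeholder for a raw copy of the sent message:

    ```python
    # Print the headers that document a message's journey: Date, Message-ID,
    # and each Received line added by the servers that handled it.
    # "sent_message.eml" is a placeholder for a raw copy of the message.
    from email import policy
    from email.parser import BytesParser

    with open("sent_message.eml", "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    for field in ("From", "To", "Date", "Message-ID"):
        print(f"{field}: {msg[field]}")

    # Each server prepends its own Received header, so they read newest-first.
    for i, hop in enumerate(msg.get_all("Received", [])):
        print(f"Received[{i}]: {hop}")
    ```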

    Read the article

  • How to run Windows 7 Explorer shell with Administrator Privileges by default

    - by Barry Kelly
    The Windows 7 shell (Explorer) can be made to run with Administrator privileges by this manual process:

    1. Kill the Explorer shell by holding down Shift+Ctrl, right-clicking the Shut down button in the Start Menu, and selecting Exit Explorer.
    2. Start Task Manager with Ctrl+Shift+Esc.
    3. Elevate Task Manager's privileges by going to the Processes tab and selecting Show processes from all users.
    4. Start a new instance of the shell via File | Run in Task Manager, typing in explorer and selecting Create this task with administrative privileges.

    After following the above process, the Windows shell will be running with administrative privileges, and any programs it launches will also have administrative privileges. This makes performing tasks that require the privilege far easier, particularly for command-line applications, which usually fail silently or with an "Access denied." message rather than giving an opportunity to use UAC to elevate the process's privileges. What I'm interested in, though, is creating an account which uses a privileged shell by default, rather than having to follow this laborious process every time. How can it be done?

    Read the article

  • Using ethernet cables?

    - by Farhad Yusufali
    I have an HDTV which supports internet connectivity through an ethernet port or wirelessly (but the latter requires an additional wireless USB adapter, which I don't have). I also have a Blu-ray player that supports internet connectivity through an ethernet port or wirelessly (natively; no other devices are needed). I want to connect my TV to the internet, but my router is too far away. My Blu-ray player, however, is already connected wirelessly. If I connected my TV to my Blu-ray player via ethernet, would my TV be able to connect to the internet? To clarify: Wireless Signal -> Blu-ray Player -> Ethernet Cable -> TV. Would this work?

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure the execution timing of specific methods in your application, you usually use one of the following.

    Time measurement methods and their potential pitfalls:

    - Stopwatch: The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor-specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power-saving mode. But things do change, luckily. From Intel's Designer's vol3b, section 16.11.1: "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."
    - DateTime.Now: Good, but it has only a resolution of 16ms, which may not be enough if you want more accuracy.

    Reporting methods and their potential pitfalls:

    - Console.WriteLine: OK if not called too often.
    - Debug.Print: Are you really measuring performance with debug builds? Shame on you.
    - Trace.WriteLine: Better, but you need to plug in a good output listener like a trace file. Be aware, though, that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use a tracing library which measures the timing for you, so that you only need to decorate some methods with tracing and can later verify if something has changed for the better or worse.

    In my previous article I compared measuring performance to quantum mechanics. This analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something: the Heisenberg uncertainty relation tells us that you cannot measure both the impulse and the location of a particle at the same time with infinite accuracy. For programmers, the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application, you will need to store them somewhere. The fastest storage space besides the CPU cache is memory, but if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application, you will also need to store the data somewhere, which costs you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is, any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality that is not entirely true.

    After falling into the trap of trusting the profiler timings several times, I have got into the habit of:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again, tracing only the specific methods, to check if a method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization, DataContractSerializer was used, and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffers format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

    ```csharp
    using ProtoBuf;
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Reflection;
    using System.Runtime.Serialization;

    [DataContract, Serializable]
    class Data
    {
        [DataMember(Order = 1)] public int IntValue { get; set; }
        [DataMember(Order = 2)] public string StringValue { get; set; }
        [DataMember(Order = 3)] public bool IsActivated { get; set; }
        [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
    }

    class Program
    {
        static MemoryStream _Stream = new MemoryStream();

        // Reset and reuse one stream so allocations do not skew the measurement.
        static MemoryStream Stream
        {
            get
            {
                _Stream.Position = 0;
                _Stream.SetLength(0);
                return _Stream;
            }
        }

        static void Main(string[] args)
        {
            DataContractSerializer ser = new DataContractSerializer(typeof(Data));
            Data data = new Data
            {
                IntValue = 100,
                IsActivated = true,
                StringValue = "Hi this is a small string value to check if serialization does work as expected"
            };

            var sw = Stopwatch.StartNew();
            int Runs = 1000 * 1000;
            for (int i = 0; i < Runs; i++)
            {
                //ser.WriteObject(Stream, data);
                Serializer.Serialize<Data>(Stream, data);
            }
            sw.Stop();
            Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
            Console.ReadLine();
        }
    }
    ```

    The results are indeed promising:

    Serializer   | Time in ms | N objects
    ------------ | ---------- | ---------
    protobuf-net |        807 | 1,000,000
    DataContract |      4,402 | 1,000,000

    Nearly a factor of 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good), and the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized, it uses some very smart code generation, which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1.

    Serializer   | Time in ms | N objects
    ------------ | ---------- | ---------
    protobuf-net |         85 |         1
    DataContract |         24 |         1

    The code-gen overhead is significant and can take up to 200ms for more complex types. The break-even point, where the code-gen cost is amortized by the faster serialization performance, is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1,000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data, I found that one of the 1,000 calls took 50% of the time. So how do I find out which call it was?

    Normal profilers fall short at this discipline. A (totally undeservedly) relatively unknown profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if that stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable that you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK.

    When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes, you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator, you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff, it can become pretty hard to find things worth optimizing if no big bottlenecks except bloatloads of code are left anymore.

    When I have found a spot worth optimizing, I make the code changes and measure again to check if something has changed. If it has got slower, and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and you allocate fewer objects, the GC times will shift to some other location. If you are unlucky, it will make your faster working code slower, because you now see GCs at times where none were before. This is where things get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things quickly in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall-clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.

    Read the article

  • How do I setup an FTP server on Windows 7?

    - by Matt Frear
    I'm having trouble getting an FTP server set up on Windows 7. I've added the service using Control Panel -> Programs -> Turn Windows features on and off. I can see the service has started in Control Panel -> Services. But when I fire up a Windows command-line window (cmd), I get Not connected.:

    ```
    C:\Users\mattf>ftp localhost
    ftp> ls
    Not connected.
    ftp> open localhost
    ftp> ls
    Not connected.
    ftp> dir
    Not connected.
    ftp> quit
    C:\Users\mattf>
    ```

    And that's as far as I've got. I have no idea why this isn't working; could it be firewall settings?
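
    A quick way to separate "nothing is listening" from "the firewall is blocking" is to test the FTP control port directly, first on the server itself and then from another machine. A minimal sketch; host and port are the FTP defaults and can be adjusted:

    ```python
    # Check whether anything is accepting connections on the FTP control port.
    # Listening locally but unreachable from other machines usually points at
    # the firewall rather than the FTP service itself.
    import socket
    import sys

    host = sys.argv[1] if len(sys.argv) > 1 else "localhost"
    try:
        with socket.create_connection((host, 21), timeout=5) as conn:
            banner = conn.recv(1024).decode(errors="replace").strip()
            print(f"Connected to {host}:21, banner: {banner!r}")
    except OSError as e:
        print(f"Could not reach {host}:21 -> {e}")
    ```

    If even the local test fails, check in IIS Manager that an FTP site has actually been created and started; enabling the Windows feature alone does not publish a site.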

    Read the article

  • Getting the alternative to the 200-Line Linux Kernel patch to work

    - by Gödel
    Apparently, there is a comparable alternative to the 200-line kernel patch that involves no kernel upgrade. It is presented here and discussed here. However, I am not sure whether webupd8's solution (under the section "Use it in Ubuntu") actually works on Ubuntu. In particular, one commenter on ./ says he's getting an error message. Could anyone post the "correct" method that actually works? Suggested solution: based on the comments I've read so far, the following seems to work.

    (1) In /etc/rc.local, add the following lines above exit 0:

    ```sh
    mkdir -p /dev/cgroup/cpu
    mount -t cgroup cgroup /dev/cgroup/cpu -o cpu
    mkdir -m 0777 /dev/cgroup/cpu/user
    echo "/usr/local/sbin/cgroup_clean" > /dev/cgroup/cpu/release_agent
    ```

    (2) Create a file named /usr/local/sbin/cgroup_clean with the following content:

    ```sh
    #!/bin/sh
    rmdir /dev/cgroup/cpu/$1
    ```

    (3) In your ~/.bashrc, add:

    ```sh
    if [ "$PS1" ] ; then
        mkdir -m 0700 /dev/cgroup/cpu/user/$$
        echo $$ > /dev/cgroup/cpu/user/$$/tasks
        echo "1" > /dev/cgroup/cpu/user/$$/notify_on_release
    fi
    ```

    (4) To make sure the execute bit is on, run sudo chmod +x /usr/local/sbin/cgroup_clean /etc/rc.local

    (5) Reboot.

    Read the article

  • Adding an ASP website in IIS7.5 on Windows 7

    - by birdus
    I'm trying to add an ASP website under IIS 7.5 on Windows 7 and am having no luck so far. This site is just for me to hit locally. I need to make some changes to some of the HTML in some of the ASP files, and I just need to be able to test my changes as I make them. I installed IIS and checked the box for ASP. Next, I added an application pool, which I called ASP and which has "No Managed Code" and "ASP" set. Next, I added the website by right-clicking "Sites", then clicking "Add Web Site...". I gave it a name, set it to use the ASP app pool, pointed it to the path where the ASP code is (I left it at pass-through authentication), and typed in 5555 as the port, so as not to interfere with the default website. The code is sitting on my server, and the path simply uses the mapped drive that I always use to access files on that drive array. When I type in http://mysite:5555, I get "could not find mysite:5555". I don't really know if all these settings are correct or what else I should try. What am I missing? Thanks, Jay

    Read the article

  • Geek Bike Ride at JavaOne 2012 - Pictures

    - by arungupta
    Following the tradition of JavaOne Latin America 2011, a gorgeous day in San Francisco marked the beginning of JavaOne 2012 with another Geek Bike Ride. About 50 Java developers got together this morning at Fisherman's Wharf and rode along the Marina, Crissy Field, Fort Mason, and the Golden Gate Bridge, ultimately finishing in downtown Sausalito. This is a beautiful biking trail, mostly flat with a couple of good hills; some folks even continued to Tiburon for an extra challenge. Check out the map by Blazing Saddles for the exact course. They provide excellent bike rentals and good service too! Here are some pictures from the day (credits: Yoshio Terada). And check out a video of bikers rolling down the hill (credits: Yoshio Terada). Thank you OTN for sponsoring the t-shirts, and Kevin Nilson, fearless leader of the Silicon Valley JUG, for hosting the event! And now on to the main conference starting tomorrow! Here is the evolving album for JavaOne 2012 so far ... And don't forget, I'm still recruiting runners for the Community Run on Oct 1 at 6:17am PT :-)

    Read the article

  • Running a small IPTV station

    - by nixterrimus
    I'm looking to run an IPTV station for my dorm. I know I can serve multicast, so that's not a problem. The station will serve out podcasts and other CC-licensed content. The target endpoint is XBMC, a media center. So far I know that I need to serve an RTP stream over UDP, carrying MPEG-4 AVC main or high profile video with AAC (or AC3?) audio. I've had some luck streaming with VLC and VLM, but it seems limited. What are my other options? Everything has to run on Linux, hopefully open source. How can I use playlists and not live streams? What are my software options?
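
    On the playlist question: the same VLC can run headless and loop an ordinary playlist into a multicast RTP/MPEG-TS stream from the command line. A sketch of a small Python launcher follows; the multicast group, port, codecs and bitrates are assumptions to adjust, and the --sout chain should be double-checked against your VLC build (vlc --help or the VLC streaming documentation):

    ```python
    # Launch headless VLC (cvlc) to loop a playlist as an RTP/MPEG-TS
    # multicast stream, announced over SAP so clients like XBMC can find it.
    # The multicast address, port, and transcode settings are illustrative.
    import subprocess

    PLAYLIST = "station.m3u"  # an ordinary playlist of podcast files
    SOUT = (
        "#transcode{vcodec=h264,vb=1200,acodec=mp4a,ab=128}"
        ":rtp{mux=ts,dst=239.255.1.1,port=5004,sdp=sap,name=DormTV}"
    )

    subprocess.run(["cvlc", "--loop", PLAYLIST, "--sout", SOUT])
    ```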

    Read the article

  • Good ways to restart all the computers in a remote cluster?

    - by vgm64
    I have a cluster that I manage, and from time to time I get emails from each node (and the head node) begging to be restarted after an automatic upgrade. Currently, my best solution so far is a shell script like:

    ```sh
    $> cat cluster_reboot.sh
    ssh [email protected] reboot
    ssh [email protected] reboot
    ssh [email protected] reboot
    ssh [email protected] reboot
    ssh [email protected] reboot
    ssh [email protected] reboot
    ```

    I end up just typing the root password six times, but it works, I guess. Is there a better way? Can I force the head node to reboot the computers for me?
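
    One incremental improvement is to drive the same ssh commands from a loop that reports per-host results; combined with key-based root authentication (ssh-copy-id), it removes the six password prompts. A minimal sketch; the hostnames are placeholders for the real nodes:

    ```python
    # Reboot every node over ssh and report the result per host.
    # Assumes key-based auth is set up for root (e.g. via ssh-copy-id);
    # BatchMode makes ssh fail fast instead of prompting if it isn't.
    import subprocess

    HOSTS = ["node1", "node2", "node3", "node4", "node5", "head"]

    for host in HOSTS:
        result = subprocess.run(
            ["ssh", "-o", "BatchMode=yes", f"root@{host}", "reboot"],
            capture_output=True, text=True)
        # ssh often reports a dropped connection as the box goes down, so
        # read the exit code together with stderr rather than in isolation.
        print(f"{host}: exit {result.returncode} {result.stderr.strip()}")
    ```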

    Read the article

  • Share laptop Wi-Fi with router set up as access point via ethernet cables

    - by obie
    I have an ADSL modem which serves as a wireless router. There's a laptop connected to it through a cable, and in the other room I have a laptop which is connected to the router wirelessly. Since I have other appliances that need to be connected but are not Wi-Fi capable, I have another Wi-Fi router. How can I share the second laptop's Wi-Fi connection with the second router, so that the router can then serve as an access point giving internet to my other appliances through ethernet cables? I want something like this: INTERNET -> MODEM -> (WIFI) LAPTOP 1 / (WIFI) LAPTOP 2 -> (WIFI) Router 2 -> (ETHERNET) DREAMBOX & WDTV. What I've tried so far: I opened the admin interface of router 2, set the static IP 192.168.100.13, and disabled its DHCP (the DHCP range of the main router starts at 192.168.100.1). Then I hooked up router 2 to laptop 2 with a Cat5 cable, but for some reason it's not working.

    Read the article

  • UEFI/GPT Win 7 Load Failure in Dual Boot and no GRUB2 [Ubuntu 12.04]

    - by cristian_jordache
    Configuration: MBB: ASRock X79 Extreme6; Win 7 installed on an INTEL 40GB SSD (GPT-partitioned); Ubuntu 14.04 on a CORSAIR 30GB SSD (ext4 and swap). I had Windows 7 installed previously in UEFI mode, using 3 partitions (GPT), and it works fine if left alone. In the UEFI BIOS settings I can see sometimes a "Windows Boot Manager" entry and other times a "UEFI Intel" entry for the INTEL HDD, and Windows will boot properly when selecting whichever one is available at the time. I installed Ubuntu 14.04 after Win 7 without changing any UEFI BIOS settings, and it works fine only if the BIOS is set with the Ubuntu partition as the first drive to boot, in AHCI mode. If both SSD drives are connected, the Win7 Intel boot drive can be chosen as the first boot device, but only as an "AHCI Intel drive" (no "Windows Boot Manager" or "UEFI Intel device" options are available in the BIOS Boot menu), and Win7 will not load properly as long as the Ubuntu SSD is not physically disconnected. Windows will try: it starts booting for a few seconds, but then fails, replacing the Win7 logo and startup animation with the "old" white progress bar, and then notifies the user that there is an issue, prompting to load Win7 in Normal Mode again or try Recovery Mode to fix it. If I let the Windows INTEL HDD boot via the BIOS/UEFI "Windows Boot Manager" selection, I may see the purple screen of GRUB2 load for a while, but there's no selection for Ubuntu or Windows, and/or the machine does not boot, showing a black screen with a small command-prompt cursor blinking at the top. So far the only option I see to have Ubuntu boot side by side with Win7 is to reformat the Win7 SSD and set it to boot in legacy BIOS mode with an MBR instead of GPT. From my understanding this is a quite complex issue to fix (Rod Smith's answer was pretty helpful: UEFI boot on my Asus k52f), but any other suggestions are welcome. I find it a bit odd that I can boot Windows 7 from the SSD, or an Ubuntu DVD from a DVD drive, with the UEFI BIOS set to "AHCI mode" and using the "UEFI/Windows Boot Manager" boot option, yet I cannot boot a secondary SSD with Ubuntu using the same BIOS/UEFI boot configuration. It looks like plugging in the second SSD (the Ubuntu partition) interferes with the boot options in the UEFI BIOS.

    Read the article

  • Wired and Wireless USB Network Adapter

    - by Evan M.
    Does anyone know if there are any USB network adapters out there that have both wired and wireless networking capability? Far-fetched, I know, but I thought I'd ask. Background: some of our users have locked-down laptops on which we also include unlocked virtual machines running in VMware Player. Sometimes the users need network connectivity for their VMs where NAT and bridged networking from the host won't work. To supplement this, we want to supply them with adapters that they can pass through to the VM using VMware's USB pass-through capability to provide appropriate connectivity. They will need both wired and wireless capability, so rather than carrying around two adapters, we were hoping we could get a combination unit to reduce it to one. Thanks

    Read the article

  • Depth interpolation for z-buffer, with scanline

    - by Twodordan
    I have to write my own software 3D rasterizer, and so far I am able to project my 3D model, made of triangles, into 2D space: I rotate, translate and project my points to get a 2D representation of each triangle. Then I take the 3 triangle points and implement the scanline algorithm (using linear interpolation) to find all points[x][y] along the edges (left and right) of the triangles, so that I can scan the triangle horizontally, row by row, and fill it with pixels. This works. Except I also have to implement z-buffering. This means that, knowing the rotated and translated z coordinates of the 3 vertices of the triangle, I must interpolate the z coordinate for all the other points I find with my scanline algorithm. The concept seems clear enough: I first find Za and Zb with these calculations:

    ```javascript
    var Z_Slope = (bottom_point_z - top_point_z) / (bottom_point_y - top_point_y);
    var Za = top_point_z + ((current_point_y - top_point_y) * Z_Slope);
    ```

    Then for each Zp I do the same interpolation horizontally:

    ```javascript
    var Z_Slope = (right_z - left_z) / (right_x - left_x);
    var Zp = left_z + ((current_point_x - left_x) * Z_Slope);
    ```

    And of course I write to the z-buffer, if the current z is closer to the viewer than the previous value at that index. (My coordinate system is x: left to right; y: top to bottom; z: your face to the computer screen.) The problem is, it goes haywire. The project is here, and if you select the "Z-Buffered" radio button, you'll see the results... (Note that the rest of the options before "Z-Buffered" use the painter's algorithm to correctly order the triangles. I also use the painter's algorithm, only to draw the wireframe in "Z-Buffered" mode, for debugging purposes.) PS: I've read here that you must turn the z's into their reciprocals (meaning z = 1/z) before you interpolate. I tried that, and it appears that there's no change. What am I missing? (Could anyone clarify precisely where you must turn z into 1/z and where to turn it back?)
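
    On the PS: the reciprocal is taken once per vertex, right after projection, and everything downstream interpolates and compares only the reciprocal; after a perspective projection, z itself is not linear in screen space, but 1/z is. A minimal sketch of one scanline in Python (rather than the asker's JavaScript), with illustrative names:

    ```python
    # Perspective-correct depth for one scanline: take 1/z once per vertex,
    # interpolate that value linearly along edges and across the scanline,
    # and only invert (or compare reciprocals directly) per pixel.

    def edge_recip_z(top_y, top_z, bottom_y, bottom_z, y):
        """Linearly interpolate 1/z down an edge at scanline y."""
        t = (y - top_y) / (bottom_y - top_y)
        return (1.0 / top_z) + t * ((1.0 / bottom_z) - (1.0 / top_z))

    def draw_scanline(y, left_x, right_x, recip_za, recip_zb, zbuffer, put_pixel):
        """Fill one row, interpolating 1/z from the left edge to the right edge."""
        for x in range(int(left_x), int(right_x) + 1):
            t = (x - left_x) / (right_x - left_x) if right_x != left_x else 0.0
            recip_z = recip_za + t * (recip_zb - recip_za)
            # With z growing away from the viewer, a LARGER 1/z is CLOSER.
            # Initialize the buffer to 0.0 (i.e. 1/infinity) for this test.
            if recip_z > zbuffer[y][x]:
                zbuffer[y][x] = recip_z
                put_pixel(x, y)  # true depth, if ever needed: z = 1.0 / recip_z
    ```

    Comparing the reciprocals directly also saves a division per pixel, since the depth test only needs a consistent ordering, not the true z value.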

    Read the article

  • How to deploy local project to Amazon

    - by Nai
    I have a small webapp written in Python/Django which works fine on my local machine. I've been tinkering with and setting up my server on the free tier of Amazon EC2 by following online tutorials. However, the tutorials I have found so far show you how to set up your instance and then stop there. So my question is: how do I get my local webapp onto my Amazon instance? FYI, I'm a sysadmin/web dev noob. Thanks.
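
    The gap between "instance is running" and "my code is on it" is usually just copying the project over and starting the app via SSH. A minimal sketch with the Fabric library, as one possible tool among many (git pull, rsync, or scp work just as well); the host, key path, and project paths are placeholders:

    ```python
    # Push a local Django project to an EC2 instance and run it with Django's
    # dev server, all over SSH. Host, key file, and paths are placeholders;
    # for anything beyond testing you'd front it with gunicorn/nginx instead.
    from fabric import Connection  # pip install fabric

    conn = Connection(
        "ubuntu@ec2-203-0-113-10.compute-1.amazonaws.com",
        connect_kwargs={"key_filename": "~/.ssh/my-ec2-key.pem"},
    )

    # Copy the project up (an archive keeps it to one transfer).
    conn.local("tar czf /tmp/myapp.tar.gz -C ~/projects myapp")
    conn.put("/tmp/myapp.tar.gz", "/tmp/myapp.tar.gz")
    conn.run("mkdir -p ~/apps && tar xzf /tmp/myapp.tar.gz -C ~/apps")

    # Install dependencies and launch on port 8000 (remember to open that
    # port in the instance's EC2 security group first).
    conn.run("cd ~/apps/myapp && pip install -r requirements.txt")
    conn.run("cd ~/apps/myapp && nohup python manage.py runserver 0.0.0.0:8000 &",
             pty=False)
    ```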

    Read the article
