Search Results

Search found 40998 results on 1640 pages for 'setup project'.

  • How do I get Windows 7 To Recognize a newly installed RAID 5 volume?

    - by GregH
    I had a previously running Windows 7 (64 bit) system. I added 3 new 1TB Seagate drives that I set up as a RAID 5 volume. I have a Gigabyte GA-P55M-UD2 motherboard. I installed the drives, set up the BIOS and configured the three drives as a RAID volume through the RAID setup utility that was accessed via Ctrl-I while the system was booting. I rebooted the system and could see the drives during the boot sequence. However, when Windows 7 was starting I got an error (quick blue screen) and then Windows tried to repair itself with no success. Do I need to install RAID drivers in Windows? How do I do it if Windows won't boot? Thanks in advance.

    Read the article

  • Windows XP does not list WPA wireless networks

    - by Tomalak
    What can be the reason that Windows XP does not show WPA-encrypted wireless networks? The laptop I have problems with is an older model (Toshiba Satellite Pro 6100) with a fresh install of Windows XP SP3. The wireless network card in it is an Agere product that is listed as "Toshiba Wireless LAN Mini PCI Card". The networks showed up perfectly before I first tried to connect to one (it was set to WPA2). The connection failed (the card supports WPA only), then something must have happened, and Windows now hides these networks. A manually configured WPA setup via Windows' own wizard works; I'm using it right now. The network just won't show up in the list of available networks on its own. I suspect that XP incorrectly set a flag somewhere indicating that this network card does not support WPA. Is there such a flag, and if so, how can I change it back?

    Read the article

  • Sending cron output to a file with a timestamp in its name

    - by Philip Morton
    I have a crontab like this on a LAMP setup:

        0 0 * * * /some/path/to/a/file.php > $HOME/cron.log 2>&1

    This writes the output of the file to cron.log, but each run overwrites whatever was previously in the file. How can I get cron to output to a file with a timestamp in its filename? An example filename would be something like 2010-02-26-000000-cron.log. I don't really care about the format, as long as it has a timestamp of some kind. Thanks in advance.
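    One way to do it (a sketch; note that % is special in crontab and must be escaped as \%, otherwise cron treats it as a newline):

        # timestamped log per run; date format matches the example above
        0 0 * * * /some/path/to/a/file.php > "$HOME/$(date +\%Y-\%m-\%d-\%H\%M\%S)-cron.log" 2>&1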

    Read the article

  • Bonjour/mDNS Broadcast across subnets

    - by Matthew Savage
    I have just set up a new OS X Server in our office and verified that everything is working fine over our wired network (192.168.126.0/24). The problem I am having is that our clients (Mac laptops) are mainly connected via wireless, which runs on a different subnet (192.168.1.0/24), and the mDNS traffic isn't reaching that subnet. The network configuration is somewhat foreign to me (I don't manage the network in this location, only, as of recently, the servers), but I don't believe there are any firewall or routing rules between the two subnets that would cause the traffic to be rejected. I'm wondering if this is simply mDNS being unable to cross between two different subnets (I'm still reading up on multicast to understand it more) or whether there is something else I might be able to try.
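    Worth noting while reading up: mDNS uses link-local multicast (224.0.0.251), which routers do not forward between subnets by default, so this behaviour is expected rather than a misconfiguration. One common workaround, assuming a Linux box with an interface in each subnet is available, is to run an mDNS reflector such as Avahi's (a config sketch; the interface names are hypothetical):

        # /etc/avahi/avahi-daemon.conf
        [server]
        allow-interfaces=eth0,eth1

        [reflector]
        enable-reflector=yes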

    Read the article

  • Autoscale Rackspace Cloud, Scalr or DIY?

    - by Andre Jay Marcelo-Tanner
    I'm looking into creating a setup on Rackspace Cloud that will allow me to autoscale my web servers (no DB) on demand, preferably based on something like response time. I've read up on configuration tools like Puppet/Chef, but I'm thinking I can just launch from prepared server images that are ready to go. Is there any tool out there already that can monitor my existing nodes' response times and then launch or scale up new ones based upon certain variables, like average X load over Y time? I see there are commercial offerings like Scalr and RightScale, but how would I do this myself?

    Read the article

  • AutoCAD 11 and network file shares

    - by gravyface
    Small network of perhaps half a dozen engineers, currently working on local copies of AutoCAD project files, which are then copied back up to the file server (2008 Standard, 1-2 year old Dell server hardware, RAID 5 SAS disks (10k? not positive)) at the end of the day. To me, this sounds horribly inefficient and error-prone; however, I've been told that "AutoCAD and network files = bad idea" and this is gospel. The network is currently 10/100 (perhaps this is the reason for the "gospel"), but all the workstations are under 2 years old and have GbE NICs, so an upgrade of the core switch is long overdue. However, I know certain applications don't like network file access at all, and any sign of latency or disruption brings the whole thing crashing down. Anyone care to chime in?

    Read the article

  • Is there a way to launch a command within a proper zsh shell?

    - by Wam
    I'm not really clear with my question here, so let me rephrase it: I've set up a launch_workspace.sh to launch tmux directly with 5 different commands loaded. Here is its current content:

        #!/bin/sh
        tmux new-session -d -s scube -n 'vim' "vim"
        tmux new-window -t scube:2 -n 'server' "$SHELL -c 'script/rails server'"
        tmux new-window -t scube:3 -n 'yard' "$SHELL -c 'bundle exec yard server --gems'"
        tmux new-window -t scube:4 -n 'spork' "$SHELL -c 'bundle exec guard'"
        tmux new-window -t scube:5 -n 'autotest' "$SHELL -c 'bundle exec autotest'"
        tmux new-window -t scube:6 -n 'shell' "$SHELL"
        tmux select-window -t scube:1
        tmux -2 attach-session -t scube

    The problem: my zsh ($SHELL being zsh) launches the commands fine, but when I Ctrl+C any of them, the whole zsh exits (and takes the tmux window with it) instead of returning to a prompt. Is there a way to get that behavior, i.e. launch zsh with a command and drop back to a proper zsh prompt when the command dies? Cheers
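    One way that should give exactly this (a sketch, not tested against this setup): start each window with a plain interactive zsh and have tmux type the command into it with send-keys. Ctrl+C then only kills the foreground command and leaves the prompt alive:

        tmux new-window -t scube:2 -n 'server'
        tmux send-keys -t scube:2 "script/rails server" C-m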

    Read the article

  • Rackspace Cloud Servers in Europe?

    - by mit
    We have set up a cloud virtual server at Rackspace in the US, but we use it from Europe, and I have found I am not quite happy with the response time. Of course I knew there would be some latency, but I am not sure if it is the overseas latency (ping is 120 ms) or also the minimal resources. It is the smallest machine, 256 MB RAM, 10 GB disk, running MediaWiki on Ubuntu 10.04 64-bit. The instance lives in the Rackspace ORD1 datacenter. As soon as they have opened their new facilities in the UK, we plan to move the instance there, but we are already planning more machines; the pricing is quite attractive. I don't really want to do a lot of measuring and benchmarking, so I am just asking for your opinions, and it would be nice to hear what you can tell from your experience, perhaps from someone who runs such small instances in the US. Also, what can we really expect if we upgrade to more resources?

    Read the article

  • Windows 8 Will be Here Tomorrow; but Should Silverlight be Gone Today?

    - by andrewbrust
    The software industry lives within an interesting paradox. IT in the enterprise moves slowly and cautiously, upgrading only when safe and necessary. IT interests intentionally live in the past. On the other hand, developers and Independent Software Vendors (ISVs) not only want to use the latest and greatest technologies, but this constituency prides itself on gauging tech’s future and basing its present-day strategy upon it. Normally, we as an industry manage this paradox with a shrug of the shoulder and musings along the lines of “it takes all kinds.” Different subcultures have different tendencies. So be it.

    Microsoft, with its Windows operating system (OS), can’t take such a laissez-faire view of the world, though. Redmond relies on IT to deploy Windows and (at the very least) influence its procurement, but it also relies on developers to build software for Windows, especially software that has a dependency on features in new versions of the OS. It must indulge and nourish developers’ fetish for an early birthing of the next generation of software, even as it acknowledges the IT reality that the next wave will arrive on schedule in Redmond and will travel very slowly to end users.

    With the move to Windows 8, and the corresponding shift in application development models, this paradox is certainly in place. On the one hand, the next version of Windows is widely expected sometime in 2012, and its full-scale deployment will likely push into 2014 or even later. Meanwhile, there’s a technology that runs on today’s Windows 7, will continue to run in the desktop mode of Windows 8 (the next version’s codename), and provides absolutely the best architectural bridge to the Windows 8 Metro-style application development stack. That technology is Silverlight. And given what we now know about Windows 8, one might think, as I do, that Microsoft ecosystem developers should be flocking to it.

    But because developers are trying to get a jump on the future, and since many of them believe the impending v5.0 release of Silverlight will be the technology’s last, not everyone is flocking to it; in fact some are fleeing from it. Is this sensible? Is it not unprecedented? What options does it lead to? What’s the right way to think about the situation?

    Is v5.0 really the last major version of the technology called Silverlight? We don’t know. But Scott Guthrie, the “father” and champion of the technology, left the Developer Division of Microsoft months ago to work on the Windows Azure team, and he took his people with him. John Papa, who was a very influential Redmond-based evangelist for Silverlight (and is a Visual Studio Magazine author), left Microsoft completely. About a year ago, when initial suspicion of Silverlight’s demise reached significant magnitude, Papa interviewed Guthrie on video and their discussion served to dispel developers’ fears; but now they’ve moved on.

    So read into that what you will, and let’s suppose, for the sake of argument, that the speculation is correct and Silverlight’s days of major revision and iteration are over. Let’s assume the shine and glimmer have dimmed. Let’s assume that any Silverlight application written today, and therefore any investment of financial and human resources made in Silverlight development today, is destined for rework and extra investment in a few years, if the application’s platform needs to stay current. Is this really so different from any technology investment we make?

    Every framework, language, runtime and operating system is subject to change, to improvement, to flux and, yes, to obsolescence. What differs from project to project is how near-term that obsolescence is and how disruptive the change will be. The shift from .NET 1.1 to 2.0 was incremental. Some of the further changes were too. But the switch from Windows Forms to WPF was major, and the change from ASP.NET Web Services (asmx) to Windows Communication Foundation (WCF) was downright fundamental.

    Meanwhile, the transition to the .NET development model for Windows 8 Metro-style applications is actually quite gentle. The finer points of this subject are covered nicely in Magenic’s excellent white paper “Assessing the Windows 8 Development Platform.” As the authors of that paper (including Rocky Lhotka) point out, Silverlight code won’t just “port” to Windows 8. And, no, Silverlight user interfaces won’t either; Metro also supports XAML, but that relationship is not commutative. But the concepts, the syntax, the architecture and developers’ skills map from Silverlight to Windows 8 Metro and the Windows Runtime (WinRT) very nicely. That’s not a coincidence. It’s not an accident. This is a protected transition. It’s not a slap in the face.

    There are a few things that are unnerving about this transition, which make it seem markedly different from others:

    - The assumed end of the road for Silverlight is something many think they can see. Instead of being ignorant of the technology’s expiration date, we believe we know it. If ignorance is bliss, it would seem our situation lacks it.
    - The new technology involving WinRT and Metro involves a name change from Silverlight.
    - .NET, which underlies both Silverlight and the XAML approach to WinRT development, has just about reached 10 years of age. That’s equivalent to 80 in human years, or so many fear.

    My take is that the combination of these three factors has contributed to what for many is a psychologically compelling case that Silverlight should be abandoned today and HTML 5 (the agnostic kind, not the Windows RT variety) should be embraced in its stead. I understand the logic behind that. I appreciate the preemptive, proactive, vigilant conscientiousness involved in its calculus. But for a great many scenarios, I don’t agree with it. HTML 5 clients, no matter how impressive their interactivity and their emulation of native application interfaces may be, are still second-class clients. They are getting better, especially when hardware acceleration and fast processors are involved. But they still lag. They still feel like they’re emulating something, like they’re prototypes, like they’re not comfortable in their own skins. They are based on compromise, and they feel compromised too.

    HTML 5/JavaScript development tools are getting better, and will get better still, but they are not as productive as tools for other environments, like Flash, like Silverlight, or even the more primitive tooling for iOS or Android. HTML’s roots as a document markup language, rather than an application interface, create a disconnect that impedes productivity. I do not necessarily think that problem is insurmountable, but it’s here today.

    If you’re building line-of-business applications, you need a first-class client and you need productivity. Lack of productivity increases your costs and worsens your backlog. A second-class client will erode user satisfaction, which is never good. Worse yet, this erosion will be inconspicuous, rather than easily identified and diagnosed, because the inferiority of an HTML 5 client relative to a native one is hard to pin down and, notably, doing so at this juncture in the industry is unpopular. Why would you fault a technology that everyone believes is revolutionary? Instead, user disenchantment will remain latent, and yet will add to the malaise caused by slower development.

    If you’re an ISV and you’re coveting the reach of running multi-platform, it’s a different story. You’ve likely wanted to move to HTML 5 already, and the uncertainty around Silverlight may be the only remaining momentum or pretext you need to make the shift. You’re deploying many more copies of your application than a line-of-business developer is anyway; this makes the economic hit from lower productivity less impactful, and the wider potential installed base might even make it profitable.

    But no matter who you are, it’s important to take stock of the situation and do it accurately. Continued but merely incremental changes in a development model lead to conservatism and a general lack of innovation in the underlying platform. Periods of stability and equilibrium are necessary, but permanence in that equilibrium leads to loss of platform relevance, market share and utility. Arguably, that’s already happened to Windows. The change Windows 8 brings is necessary and overdue. The marked changes in using .NET if we’re to build applications for the new OS are inevitable. We will ultimately benefit from the change, and what we can reasonably hope for in the interim is a migration path for our code and skills that is navigable, logical and conceptually comfortable.

    That path takes us to a place called WinRT, rather than a place called Silverlight. But considering everything that is changing for the good, the number of disruptive changes is impressively minimal. The name may be changing, and there may even be some significance to that in terms of Microsoft’s internal management of products and technologies. But as the consumer, you should care about the ingredients, not the name. Turkish coffee and Greek coffee are much the same. Although you’ll find plenty of interested parties who will find the names significant, drinkers of the beverage should enjoy either one. It’s all coffee, it’s all sweet, and you can tell your fortune from the grounds that are left at the end. Back on the software side, it’s all XAML, and C# or VB .NET, and you can make your fortune from the product that comes out at the end. Coffee drinkers wouldn’t switch to tea. Why should XAML developers switch to HTML?

    Read the article

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (a client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections) but had never been able to recreate the problem. As of today, the site is completely unresponsive, and all pages, even the static files, are dying with that error. Two questions: (1) What can I do to fix this (other than caching more stuff)? (2) Why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django? The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolically linked, if that matters.
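    For reference, the usual shared-hosting rewrite pattern looks roughly like this (a sketch; mysite.fcgi is a hypothetical dispatcher name). The !-f test follows symlinks and fails when a link's target is missing or unreadable, so symlinked static files can fall through to Django; adding !-l (true for any symlink) exempts them explicitly:

        RewriteEngine On
        # leave real files and symlinks alone; hand everything else to Django
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-l
        RewriteRule ^(.*)$ mysite.fcgi/$1 [QSA,L]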

    Read the article

  • (Mythbuntu) After upgrading to XBMC 11, Mythbox now says "cannot import name decodeLongLong"?

    - by Jozxyqk
    The vital stats:

    - Mythbuntu 10.10 (maverick)
    - XBMC 11 -- from the team-xbmc maverick PPA
    - MythTV 0.23.1+fix (the standard version for Mythbuntu 10.10)
    - MythBox version 1.1.0

    OK, so, I was happily going along running XBMC 10.1 on my HTPC setup, and I saw everyone was all excited about XBMC 11, and it was available from the PPA. Now, when I go into MythBox and select a recording, it shows me the following error message box:

        Error: oninit cannot import name decodeLongLong

    This only seems to affect its ability to show a thumbnail picture for the recording. When I start playing the recording, everything pretty much goes fine. What does this error message mean? Is there any way I can fix it? Is there a library I am missing or something?

    Read the article

  • rsync ssh not working in crontab after reboot

    - by kabeer
    I was using a script to perform rsync in sudo's crontab. The script does a two-way rsync (from serverA to serverB and back), using SSH to connect between the servers. After I rebooted both server machines, the rsync stopped working in sudo's crontab. I also set up a new cron job and it fails. The error is:

        rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
        rsync: connection unexpectedly closed (0 bytes received so far) [receiver]

    However, when run from a terminal, the rsync script works as expected without issues. Please help; it looks like an issue with SSH, although I am able to SSH into either server without issues.
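    A likely culprit, though this is an assumption (exit code 255 just means SSH itself failed): cron runs with a minimal environment and no SSH agent, and under sudo it is root's keys and known_hosts that get used, not your user's; an agent or environment that existed before the reboot is gone afterwards. Pointing rsync at an explicit key usually sidesteps all of that (paths here are hypothetical):

        rsync -az -e "ssh -i /home/youruser/.ssh/id_rsa -o BatchMode=yes" /src/ user@serverB:/dest/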

    Read the article

  • win xp client, win 7 host, only xp drivers

    - by brett
    I have a Windows 7 64-bit box which has XP on it in a VMware virtual machine, alongside the Windows 7 install itself. I can use my old USB wifi card under virtual XP, as I have the wifi drivers, but apparently the manufacturing company never made any further drivers, nor did it release the source code. Is it possible to get networking between the client and the host, so that my host can browse etc.? I thought the Microsoft Loopback Adapter might be the answer, but every example I can find of its use describes a setup where the host is connected fine and needs to route data to the client as well.

    Read the article

  • Mirror virtualized development environment

    - by David Casillas
    I work alone on some iOS projects in a local environment. I have been thinking of a way to share my development environment between my Mac mini and my MacBook. I mostly work at home on the mini, but sometimes I need to do a demo or work outside, and I would like to have the development environment mirrored on both. I have thought of using a virtual machine (via VirtualBox) with just my development tools installed. Then I could synchronize that VM between both computers with some software, so I would always have the exact same environment no matter which computer I use. Is there any good reason not to do it this way? I have not used virtualization much, so I have no background on the subject. My basic setup would be:

    - Mac mini: i7 dual core, 8 GB, OS X Mountain Lion host OS
    - MacBook: 2.4 GHz Core 2 Duo, 4 GB, OS X Lion host OS
    - VirtualBox with a Mountain Lion guest OS on both machines; Xcode 5 and the Simulator inside
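    If the VM route works out, one low-tech way to mirror it is a plain rsync of the VM folder (a sketch; the folder layout and host name are hypothetical, and the VM should be fully shut down, not paused, before syncing):

        # push the whole VM directory from the MacBook to the mini
        rsync -a --delete ~/VMs/DevVM/ user@macmini.local:VMs/DevVM/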

    Read the article

  • indirect rendering issue on 12.04, using ati driver

    - by lurscher
    I have an Ubuntu 12.04 64-bit system. When I run glxinfo I see a strange error about indirect rendering, and it fails to load some lib32/dri/swrast_dri libraries. Any idea what is going on? Please let me know if I can add any relevant information to this question.

        $ LIBGL_DEBUG=verbose glxinfo
        name of display: :0
        libGL: screen 0 does not appear to be DRI2 capable
        libGL: OpenDriver: trying /usr/lib32/dri/tls/swrast_dri.so
        libGL: OpenDriver: trying /usr/lib32/dri/swrast_dri.so
        libGL error: dlopen /usr/lib32/dri/swrast_dri.so failed (/usr/lib32/dri/swrast_dri.so: cannot open shared object file: No such file or directory)
        libGL: OpenDriver: trying /usr/lib/dri/tls/swrast_dri.so
        libGL: OpenDriver: trying /usr/lib/dri/swrast_dri.so
        libGL error: dlopen /usr/lib/dri/swrast_dri.so failed (/usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory)
        libGL error: unable to load driver: swrast_dri.so
        libGL error: reverting to indirect rendering
        display: :0  screen: 0
        direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
        server glx vendor string: ATI
        server glx version string: 1.4
        server glx extensions: GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group
        client glx vendor string: Mesa Project and SGI
        client glx version string: 1.4
        client glx extensions: GLX_ARB_create_context, GLX_ARB_create_context_profile, GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_framebuffer_sRGB, GLX_EXT_create_context_es2_profile, GLX_MESA_copy_sub_buffer, GLX_MESA_multithread_makecurrent, GLX_MESA_swap_control, GLX_OML_swap_method, GLX_OML_sync_control, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGI_video_sync, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap, GLX_INTEL_swap_event
        GLX version: 1.4
        GLX extensions: GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_multithread_makecurrent, GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap
        OpenGL vendor string: ATI Technologies Inc.
        OpenGL renderer string: AMD Radeon HD 6800 Series
        OpenGL version string: 1.4 (2.1 (4.2.11762 Compatibility Profile Context))
        OpenGL extensions: GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_fragment_program, GL_ARB_fragment_program_shadow, GL_ARB_framebuffer_object, GL_ARB_imaging, GL_ARB_multisample, GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shadow, GL_ARB_shadow_ambient, GL_ARB_texture_border_clamp, GL_ARB_texture_compression, GL_ARB_texture_cube_map, GL_ARB_texture_env_add, GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat, GL_ARB_texture_non_power_of_two, GL_ARB_texture_rectangle, GL_ARB_transpose_matrix, GL_ARB_vertex_program, GL_ARB_window_pos, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color, GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate, GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_copy_texture, GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample, GL_EXT_framebuffer_object, GL_EXT_multi_draw_arrays, GL_EXT_packed_pixels, GL_EXT_point_parameters, GL_EXT_rescale_normal, GL_EXT_secondary_color, GL_EXT_separate_specular_color, GL_EXT_shadow_funcs, GL_EXT_stencil_wrap, GL_EXT_subtexture, GL_EXT_texture3D, GL_EXT_texture_compression_s3tc, GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add, GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3, GL_EXT_texture_lod, GL_EXT_texture_lod_bias, GL_EXT_texture_mirror_clamp, GL_EXT_texture_object, GL_EXT_texture_rectangle, GL_EXT_vertex_array, GL_ATI_draw_buffers, GL_ATI_texture_env_combine3, GL_ATI_texture_mirror_once, GL_ATIX_texture_env_combine3, GL_IBM_texture_mirrored_repeat, GL_INGR_blend_func_separate, GL_NV_texture_rectangle, GL_SGIS_generate_mipmap, GL_SGIS_texture_border_clamp, GL_SGIS_texture_edge_clamp, GL_SGIS_texture_lod, GL_SGIX_shadow_ambient, GL_SUN_multi_draw_arrays
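    In case it helps others diagnose the same thing: the libGL messages say the Mesa software-rasterizer fallback (swrast_dri.so) is simply not present at any of the probed paths. A quick check, as a sketch (package name is the Ubuntu 12.04 one):

        # see whether any swrast driver exists where libGL is looking
        find /usr/lib /usr/lib32 -name 'swrast_dri.so' 2>/dev/null
        # if nothing turns up, (re)install the Mesa DRI drivers
        sudo apt-get install --reinstall libgl1-mesa-dri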

    Read the article

  • Google+ Platform Office Hours for March 21, 2012: JavaScript and the REST APIs

    It's a blast from the past. Here's the video from our office hours held on the 20th of March. In this session Jonathan and Wolff guided you through using the REST APIs with JavaScript. Get the source code: goo.gl Discuss this video on Google+: goo.gl

    - 1:05 - How to use JSONP to access the REST APIs in JavaScript
    - 2:30 - Setting up a new project in the API console
    - 7:06 - The client libraries, what are they?
    - 8:39 - Using the JavaScript client library to reimplement our example
    - 13:27 - About OAuth and private resources
    - 14:26 - Creating an OAuth client using the API console - The JavaScript client library discussion group - goo.gl
    - 24:14 - Using the JavaScript client library and REST APIs from within a hangout - hangoutbots.blogspot.com

    Q&A:

    - 19 - Are you planning to change the +1 button back?
    - 20:14 - Will this video be posted to YouTube? (Spoiler: the answer is yes)
    - 20:43 - Do you read your issues list? - The Google+ platform issue tracker: goo.gl

    Read the article

  • SCCM 2007 managing hosts in non trusted forest

    - by BoxerBucks
    I have an implementation of SCCM 2007 in forest "A" that manages hosts in that Windows 2008 forest. There is another forest/domain, "B", with no trust to forest "A", in which I also need to manage hosts. I don't need to push out clients from the SCCM console; I am going to install them manually. I just need the hosts in domain "B" to connect back to forest/domain "A" for management purposes. To date, I have not added any AD objects to domain "B" for hosts to query for site, SLP, or management point info. I am installing the clients with the command line:

        ccmsetup.exe /mp:SCCM_Server /site:mysite

    where SCCM_Server is the FQDN of my SCCM server (which is resolvable by the client). There are no ACLs between the two servers. From the logs, I can see the install complete; the client then tries to query the local AD for the site info for "mysite", but it can't find it, so it stops and never connects. Can anyone give me some direction as to how this should be set up?
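    One pattern that should apply here, offered as a hedged sketch (the site code and server names below are placeholders): clients in an untrusted forest can't resolve the site through AD, so the site code, management point, and fallback status point are usually passed to ccmsetup as installation properties rather than left for AD lookup:

        ccmsetup.exe SMSSITECODE=ABC SMSMP=sccm.forestA.example.com FSP=sccm.forestA.example.com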

    Read the article

  • One external IP 2 servers

    - by Stanley
    Currently there is one external IP pointing to a Windows web server. I now wish to add a Linux web server, and would like to know if the following setup is OK:

    - 119.xxx.xxx.xxx points to the Windows web server
    - 119.xxx.xxx.xxx/Linux_Server points to the new, additional Linux server

    If the above scheme is OK, how should it be done (in terms of where the router should be placed and configured, etc.)? If the above scheme is unusual or not workable, please suggest a best-practice scheme. Hope somebody knowledgeable can help.
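    One common way to get a path-based split like this (a sketch, with a hypothetical internal address for the Linux box): keep the router forwarding port 80 to a single frontend and let that frontend reverse-proxy the path to the other server. With Apache on the frontend it would look roughly like:

        # requires mod_proxy and mod_proxy_http to be enabled
        ProxyPass        /Linux_Server/ http://10.0.0.2/
        ProxyPassReverse /Linux_Server/ http://10.0.0.2/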

    Read the article

  • How to get better video quality in Lync?

    - by sinned
    I want to use a Logitech ConferenceCam to stream talks live via Lync. When I view the raw webcam image via VLC, the quality is very good (but the latency is high because of buffering). However, when I stream it using Lync, the video gets blurry. Is there a way to ensure QoS in Lync, or otherwise improve the video quality to (near-)native? I would rather have some dropped frames than a lower resolution where I can't read the slides. In my setup I use Lync with an Office 365 E3 contract, so I have no Lync Server in my network. I thought about replacing Lync completely with VLC, but I want to try Lync first, because VLC will probably cause firewall issues. Also, I haven't yet looked up the VLC parameters for less buffering, faster encoding, a slightly lower resolution (natively it's more than HD), and streaming.

    Read the article

  • Where to learn how to replicate an Excel template?

    - by Rosarch
    This Excel template is really cool. There are a lot of things in it I don't know how to do, such as:

    - Having header rows that "stick" to the top even when you scroll down
    - A slider on the first page that changes where the chart pulls its data from
    - Functions that seem to refer to named ranges in tables, like =SUM([nov]). Where do those names come from?
    - Clicking "back to overview" on the "Budget" page returns you to the "Dashboard" page
    - The number under "starting balance" in the top right corner of "Budget" changes when you change cell C5
    - On "Budget", each cell in the first column of each table has a drop-down menu for text, which seems to come from the "Setup" page
    - The background isn't just plain white, but when I try to format-paint it onto a new sheet, nothing happens

    If you know how any of these effects are achieved, I'm definitely curious. But I guess the main point of my question is where I can go to answer these questions for myself. Are templates explained anywhere?

    Read the article

  • How to customize Windows Failover clustering to trigger on failure of custom window service?

    - by melaos
    I'm a total newbie at Windows failover clustering. What I want to do is set up failover clustering (FC) on two Windows Server 2008 R2 machines. Right now I have my custom Windows service running on both machines, but they cannot run concurrently, as that would mess up the DB; I just want one to be available at all times (high availability). So I'm wondering if there's any way to set the failover policy to include this custom Windows service that I've installed on these machines, so that if the service goes down or dies, it will automatically trigger a failover to the second node. Is this possible? Or must it be done programmatically? If so, what is the best way? thanks ~m
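    This sounds like what the built-in "Generic Service" resource type is for: the cluster service monitors the Windows service and fails the role over when it dies. In Failover Cluster Manager it's the High Availability Wizard's "Generic Service" option; from PowerShell on 2008 R2 it would look something like this (a sketch; the service and group names are placeholders):

        Import-Module FailoverClusters
        # register the existing Windows service as a clustered generic-service role
        Add-ClusterGenericServiceRole -ServiceName "MyCustomService" -Name "MyCustomServiceGroup"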

    Read the article

  • TrueCrypt: Open volume without mounting

    - by Totomobile
    I have a corrupt TrueCrypt volume. When I try to mount it, the password is accepted fine, but I get an error: hdiutil attach failed no mountable file systems. I read a post that says you can try to recover the volume, but I have to open it first to attempt the recovery. I just need to open it without TrueCrypt trying to mount it, so I can use that partition in a data recovery program. Also, it's just an image-file volume. I am using the Mac version, and I have set up an alias for the truecrypt shell command, but I'm not sure how to enter the syntax. Please help. Thank you! T
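    For the record, a sketch of the command-line route (the volume path is a placeholder): TrueCrypt's --filesystem=none option attaches the decrypted volume as a device without asking the OS to mount a filesystem, which is exactly what a recovery tool needs:

        # map the volume without mounting any filesystem
        truecrypt --text --filesystem=none /path/to/volume.tc
        # show the mapped virtual device to hand to the recovery program
        truecrypt --text --list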

    Read the article

  • Complex knowledge management system with CRM..written internally

    - by JonH
    We've all heard of Salesforce and SugarCRM and systems like them. Unfortunately, at my workplace we have been asked to write a similar system (rather than license or purchase one). Basically, the database is fairly large. Think of modules such as: corporate groups, customers, programs, projects, sub-projects, and issue management. In simple terms, a corporate group has one to many customers, a program has one or more projects, a project has one or more sub-projects, and an issue can be created on many sub-projects. Of course the system is a bit more complex, but instead of listing every single module I think it's best to keep it simple.

    In any event, the system in its current state has only two resources working on it (basically we have to do it all: CSS, database, jQuery, ASP.NET, and C#). We've started off well by defining the UI master and footer pages, so we can reuse those across all of our pages. Now comes the hard part. The system will have about 4k end users, with say 5-10% being concurrent users. We are wondering if it makes sense to cache our database data (for say 5-10 minutes) rather than continuously hit the database. The reason is that some of these pages may have 5-10 search filters associated with them; imagine how many database hits occur every time a selection is made in a search box. Also, some of these search fields cascade, so selecting a value in an initial drop-down may cascade to several drop-down boxes under it. Is it wrong to cache? I am not finding many articles on whether it is a good idea or not. Remember, the system is similar to a CRM system where we manage our various customers, projects, sub-projects, issues, etc.
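    For what it's worth, ASP.NET's built-in cache covers exactly this time-boxed pattern. A minimal sketch (Customer, LoadCustomersFromDb, and the cache key are hypothetical names):

        using System;
        using System.Collections.Generic;
        using System.Web;
        using System.Web.Caching;

        // Serve filter data from the in-process cache, reloading from the DB at most every 5 minutes.
        private List<Customer> GetCustomers()
        {
            var customers = HttpRuntime.Cache["customers"] as List<Customer>;
            if (customers == null)
            {
                customers = LoadCustomersFromDb(); // hypothetical data-access call
                HttpRuntime.Cache.Insert("customers", customers, null,
                    DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
            }
            return customers;
        }

    Cascading filters can then be computed in memory from the cached lists instead of issuing a query per drop-down selection.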

    Read the article

  • Add a Search Box to the Drop-Down Tab List in Firefox

    - by Asian Angel
    Do you have a lot of tabs open no matter what time of day it is, and find sorting through the tab list frustrating? Then get control back with the List All Tabs Menu extension for Firefox.

    Before

    If you have a large number of tabs open, using the “Tab List Menu” can start to become a little awkward. You can use your mouse’s middle button to scroll through the list, or the tiny arrow button at the bottom, but there needs to be a better way to deal with this.

    After

    Once you have installed the extension you will notice two differences in the “Tab List Menu”: there will be a search box available and a nice scrollbar for those really long lists. Depending on your style, you can use the scrollbar to look for a particular page, or enter a search term and watch that list become extremely manageable. Definitely not hard to find what we were looking for at all.

    Conclusion

    If you are someone who has lots of tabs open at once throughout the day, then the List All Tabs Menu extension might be the perfect tool to help you sort and manage those tabs.

    Links

    Download the List All Tabs Menu extension (Mozilla Add-ons)

    Read the article
