Search Results

Search found 28707 results on 1149 pages for 'writing your own'.


  • Someone using my website for Email and significant increase in spam

    - by Joy
    Let me give you the background so that you know the full story. Last summer my web guy (he put my website together) got into a fight with someone who attempted to register on my site using the name of my company as part of his user name. I was not aware of this at all until it had escalated dramatically. I don't know why my web guy was so unprofessional in his response to this person. I really don't know him - I met him via SCORE and we have never met in person. He is a vendor. Anyway, the guy who got into it with my web guy then threatened to do all he could to hurt my business, saying he was internet savvy, etc. Nothing seemed to happen for a while; then, not long ago, this guy attempted to send me a friend request on LinkedIn. After his behavior I declined it. Shortly afterwards I began seeing a dramatic increase in spammers posting comments on the blog part of my site. Just lately I have been receiving emails from a variety of names, but all using the "@___" domain that I own for my business. I had additional security added, so now people have to register in order to comment on my blog, and I am seeing a lot of registration attempts from the same (and similar) IP addresses with bogus names and weird email addresses being blocked. So it is not creating more work, as it is all automatic. The email addresses are of more concern. Is there a way to identify a person through an IP address, or a place to report the behavior or the email usage? This guy lives in South Carolina, so he is not overseas. Any help/advice you can provide will be greatly appreciated. Thanks Joy
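
    A hedged pointer on the IP question: you generally cannot identify an individual from an IP address yourself, but a whois lookup will show which ISP owns the address block, and ISPs publish an abuse contact you can send reports (with full mail headers) to. A minimal sketch, using a documentation-range address as a placeholder for one of the blocked IPs:

        # Look up the owner of an IP block; 192.0.2.15 is a placeholder -
        # substitute one of the blocked addresses from your logs.
        whois 192.0.2.15

        # Look for an "OrgAbuseEmail" or "abuse-mailbox" field in the output;
        # that is where to send your report.

    If spammers are forging senders at your own domain, publishing an SPF record for the domain may also help receiving servers reject the forgeries.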

    Read the article

  • Windows Server 2008 R2 Software Deployment on Active Directory - Schema Issue

    - by weedave
    We have two servers, one running Windows Server 2003 SP2 and one running Windows Server 2008 R2. Both servers have their own versions of Group Policy Management (1.0.2 on 2003 and 6.0.0.1 on 2008). We want to migrate everything over to the newer 2008 server, including software deployment. However, when I try to add a new software package using a .msi file, I get the following error: "The schema for the software installation data in the Active Directory does not match the required schema." I have tried two separate software packages and get the same error on the 2008 server. However, when I do the same on the 2003 server, it adds the software package without any problems. The .msi files I am using are up to date - one is the most recent version of Google Chrome. Is this problem caused by the different versions of the OS, or of the Group Policy Management program? How do we "upgrade" our Active Directory to allow software deployment on the 2008 server? Thanks.
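
    A hedged note on the "upgrade" question: this error is commonly a sign that the forest's AD schema predates the tooling on the newer server. One frequently suggested path - an assumption to verify in a lab first, and it requires Schema Admins/Enterprise Admins rights - is to extend the schema with adprep from the Windows Server 2008 R2 installation media:

        REM Run from the 2008 R2 media, on the schema master:
        D:\support\adprep\adprep.exe /forestprep
        REM Then, once per domain:
        D:\support\adprep\adprep.exe /domainprep
        D:\support\adprep\adprep.exe /domainprep /gpprep

    Schema extensions are forest-wide and irreversible, so take a system state backup of a domain controller first.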

    Read the article

  • Losing WLAN connections but maintaining internet connections on Windows 7 Workgroup

    - by Di
    I have 4 computers, all running Windows 7, networked in a workgroup through a Billion 7404vgp-m wireless router. All drivers and firmware for the wireless adapters and router are up to date. Windows Firewall and Defender are disabled, and IPv6 is disconnected. Running NOD32 antivirus software. All have their own static IP address (192.XXX.X.XXX). When I reset the router, all computers have Internet and LAN access for about 1 hour, and then they lose the LAN connection but maintain the Internet connection. Resetting the wireless adapters or restarting the computers does nothing to fix this, but resetting the router does. What is causing this and how do I fix it? Thanks Di

    Read the article

  • Chrome shows "The site's security certificate is not trusted" error

    - by Emerald214
    Since this morning I get this error whenever I access Google Docs and some other websites. My system date and time are correct, and I checked "Automatically from the Internet". My BIOS is OK. I cleared everything (cache, cookies, private data) in Chrome and restarted the OS, but nothing changed. Firefox works, but Chrome has this problem. How do I fix it? The error reads:

    "The site's security certificate is not trusted! You attempted to reach docs.google.com, but the server presented a certificate issued by an entity that is not trusted by your computer's operating system. This may mean that the server has generated its own security credentials, which Google Chrome cannot rely on for identity information, or an attacker may be trying to intercept your communications. You cannot proceed because the website operator has requested heightened security for this domain."
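
    One hedged diagnostic, assuming OpenSSL is available from a shell: inspect the certificate chain the server is actually presenting. An unexpected "issuer=" (not a well-known CA) suggests something on the network - a proxy, security software, or an attacker - is intercepting the connection; an expected issuer points instead at a stale or broken certificate store on the machine:

        # Show the certificate chain docs.google.com presents to this machine
        openssl s_client -connect docs.google.com:443 -showcerts < /dev/null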

    Read the article

  • SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

    - by pinaldave
    When I decided to start writing about this wait type, the very first question that came to my mind was, "What does 'OLEDB' stand for?" A quick search on Wikipedia tells me that OLEDB means Object Linking and Embedding Database. (How many of you knew this?) Anyway, I found it very interesting that this wait type was one of the top 10 wait types in many of the systems I have come across in my performance tuning experience.

    Books On-Line: "OLEDB occurs when SQL Server calls the SQL Server Native Client OLE DB Provider. This wait type is not used for synchronization. Instead, it indicates the duration of calls to the OLE DB provider."

    OLEDB Explanation: This wait type primarily happens when a Linked Server or Remote Query has been executed. The most common case in which this wait type is visible is during the execution of a Linked Server query. When SQL Server retrieves data from the remote server, it uses the OLEDB API. It is possible that the remote system is not quick enough, or that the connection between them is not fast enough, leading SQL Server to wait for the results to return from the remote (or external) server. This is when the OLEDB wait type occurs.

    Reducing OLEDB wait:
    - Check the Linked Server configuration.
    - Check disk-related Perfmon counters:
      - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
      - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
      - Average Disk Read/Write Queue Length (consistently higher than your benchmark is not good)

    At this point in time, I am not able to think of any more ways of reducing this wait type. Do you have any opinion about this subject? Please share it here and I will share your comment with the rest of the Community, and of course, with due credit unto you. Please read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Books On-Line for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
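
    A hedged way to see how much OLEDB waiting a given system actually accrues is the wait-stats DMV - shown here via sqlcmd with a placeholder server name, but any query tool works:

        sqlcmd -S myserver -E -Q "SELECT wait_type, waiting_tasks_count, wait_time_ms FROM sys.dm_os_wait_stats WHERE wait_type = 'OLEDB';"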

    Read the article

  • Samsung HMX-H100P camcorder and video encoding with mencoder

    - by jskg
    Hi everyone, my background is totally unrelated to video stuff, so pardon my newbie style. I own a Samsung HMX-H100P camcorder and I'm trying to encode videos to be uploaded to YouTube and Vimeo. First problem: videos generated by the camera with no processing appear like this: http://www.youtube.com/watch?v=AANbl_DTuzE when I play them with Totem (Linux) or VideoLAN. Second problem: when I try to encode the videos produced by the camera using mencoder, I get the video at the resolution I chose, but those ugly lines and the lagging are still present. Here's the command I use:

        mencoder $inputFile -aspect 16:9 -of lavf -lavfopts format=psp -oac lavc -ovc lavc -lavcopts aglobal=1:vglobal=1:coder=0:vcodec=libx264:acodec=libfaac:vbitrate=4500:abitrate=128 -vf scale=1280:720 -ofps 25000/1001 -o $outputFile

    Any ideas? Thanks in advance
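
    A hedged guess at the "ugly lines": they sound like interlacing combing, which HD camcorders commonly produce, and scaling interlaced frames bakes the artifacts in. mencoder ships a yadif deinterlacing filter, so one variant worth trying is the same command with yadif inserted ahead of the scale step (a sketch under that interlacing assumption, not a confirmed fix):

        mencoder $inputFile -aspect 16:9 -of lavf -lavfopts format=psp \
          -oac lavc -ovc lavc \
          -lavcopts aglobal=1:vglobal=1:coder=0:vcodec=libx264:acodec=libfaac:vbitrate=4500:abitrate=128 \
          -vf yadif=0,scale=1280:720 -ofps 25000/1001 -o $outputFile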

    Read the article

  • security issue of Linux sudo command?

    - by George2
    Hello everyone,
    1. I am using a Red Hat Enterprise Linux 5 box. I find that if a user is in the /etc/sudoers file, the user can run commands with sudo under root privilege - without knowing the root password, the user only needs to enter his or her own password to run a command with sudo. Is that a correct understanding?
    2. If yes, then is it a security hole, since users other than root can run with root privilege?
    thanks in advance, George
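
    A hedged illustration of why this is a configuration decision rather than an inherent hole: sudoers can grant everything or only specific commands, and can be told to demand the root password instead of the user's. A minimal sketch of /etc/sudoers entries (edit only with visudo; the user name and paths are examples):

        # Full access, authenticated with the invoking user's own password (the default)
        george  ALL=(ALL)  ALL

        # Or: restrict the user to specific commands
        george  ALL=(root) /sbin/service httpd restart, /usr/bin/tail /var/log/messages

        # Or: require the root password rather than the user's
        Defaults rootpw

    The usual argument is that sudo narrows, logs, and individually revokes root access, whereas a shared root password does none of those.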

    Read the article

  • Cannot get git working

    - by Devin Dixon
    I'm trying to install my own git server with these instructions: http://cisight.com/how-to-setup-git-server-using-gitolite-in-ubuntu-11-10-oneiric/ But I get stuck at this point:

        git clone --verbose [email protected]:testing.git
        Cloning into 'testing'...
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    And I think it has something to do with this:

        gitolite@ip-xxxx:~$ gl-setup tmp/john.pub
        key_read: uudecode Aklkdfgkldkgldkgldkgfdlkgldkgdlfkgldkgldkgdlkgkfdnknbkdnbkdnbkdnbkfnbkdfnbkdnfbkdfnbdknbkdnbkfnbkdbnkdbnkdfnbkd [email protected] failed
        fprint failed

    I always get the failure, and I think it's preventing me from cloning the repo. The repo is there, along with the gitolite-admin.git repo. The permissions are this:

        drwxr-x--- 8 gitolite gitolite 4096 Jun 6 16:29 gitolite-admin.git
        drwxr-x--- 7 gitolite gitolite 4096 Jun 6 16:29 testing.git

    So my question is, what am I missing here?
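
    A hedged reading of that gl-setup output: "key_read: uudecode ... failed" usually means the .pub file is malformed - for example, the leading "ssh-rsa" was lost or the key got line-wrapped in transit - so gitolite never registers the key, and the later clone fails with "Permission denied (publickey)". A sketch of moving the key intact (host and file names are examples):

        # On the client: generate a keypair if you don't have one
        ssh-keygen -t rsa

        # The public key must be ONE line beginning with "ssh-rsa"
        head -1 ~/.ssh/id_rsa.pub

        # Copy it without re-wrapping (avoid pasting through an editor)
        scp ~/.ssh/id_rsa.pub gitolite@yourserver:tmp/john.pub

        # Then, on the server
        gl-setup tmp/john.pub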

    Read the article

  • Send and receive email with custom domain address, using Gmail

    - by ahmad598
    I've registered a domain and went to create a Google Apps account with it, but Google told me they don't accept .ir domains - possibly trying to prevent us from building bombs (which completely makes sense), or simply a lack of interest. Anyway, so now I'm looking for a way to use Gmail, but with my own domain name, all without using Google Apps. I've added my domain to a free dnspark.net account; it doesn't offer mail forwarding, but I can set MX records. Is there any way I could accomplish my task, or is it completely impossible?
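
    A hedged note on what MX records alone can and cannot do: an MX record only names the host that receives mail for the domain, and that host must be configured to accept it - Gmail's own MX servers only accept mail for domains signed up through Google Apps. One workaround people use is pointing MX at a third-party forwarding service, then reading the forwarded mail in Gmail and replying via Gmail's "Send mail as" feature. A zone-file sketch with placeholder hostnames:

        ; mail for example.ir delivered to a forwarding service you sign up with
        example.ir.   IN MX 10 mx1.forwarder.example.net.
        example.ir.   IN MX 20 mx2.forwarder.example.net.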

    Read the article

  • VirtualBox - split partitioned VDI into separate VDIs

    - by mathematical.coffee
    I'm very new to VirtualBox. I set up an Arch Linux VM and an Ubuntu VM (Ubuntu host), both sharing the same .vdi like so (I had in my mind a dual-boot situation):

        VDI file (25GB)
        |- /dev/sda1: 5GB (Arch Linux)
        |- /dev/sda2: [Ubuntu]
           |- /dev/sda5 (swap, 1GB)
           |- /dev/sda6 Ubuntu /, 9GB
           |- /dev/sda7 Ubuntu /home, 10GB

    I've now realised that I don't want a dual-boot-type setup; I'd rather boot each machine independently (my initial thought was to share /home between Ubuntu and Arch). So, my question: can I split /dev/sda1 and /dev/sda2 each into their own .vdi files so I can use them as completely separate machines? I'd rather not have to re-install either Arch (because it took me ages to work it out!) or Ubuntu (because I've already done a few GB of updates and don't want to redo them). I haven't been able to find anything about this - most questions I see are about converting a .vdi to a partition on the host, or splitting a .vdi into multiple smaller files (that are not independent), or converting a partition on the host to a .vdi file. cheers.
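
    A heavily hedged sketch of one way to split a disk image without reinstalling, assuming enough free host disk space: convert the VDI to raw, carve out each partition with dd at the offsets the partition table reports, and convert each piece back. Each carved piece is a bare partition with no partition table or bootloader, so each new VM will still need its MBR/GRUB repaired from a live CD - an outline, not a turnkey recipe:

        # 1. Convert the VDI to a raw disk image
        VBoxManage internalcommands converttoraw combined.vdi combined.raw

        # 2. Read each partition's start sector and size (in 512-byte sectors)
        fdisk -l -u combined.raw

        # 3. Carve out one partition (skip/count are examples - use your own)
        dd if=combined.raw of=arch.raw bs=512 skip=2048 count=10485760

        # 4. Turn the piece back into a VDI for a new VM
        VBoxManage convertfromraw arch.raw arch.vdi --format VDI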

    Read the article

  • One server, two APC UPS on redundant power supplies: how to trigger shutdown?

    - by Falken
    I have a server racked, with its redundant power supplies plugged into two APC Smart-UPS 3000 XLM units. Each UPS is connected to a different mains power source. Two instances of apcupsd are running, each one connected to its own UPS. They can both detect when a UPS is on battery, and each UPS can then trigger a shutdown on the server. The question is: how do I NOT shut down if ONLY ONE UPS runs out of battery? Note: the Smart-UPS 3000 XLM has a "Power Sync" function that is able to connect to its peer and detect its status. But when I pulled the plug on one of them, the shutdown order was sent anyway. I'm thinking about modifying the shutdown scripts to check with "apcaccess" whether the other UPS is down, as sketched below. Any experience with this would be appreciated!
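
    That apcaccess idea, sketched with heavy hedging: run each apcupsd instance's network information server on its own port, and have each instance's shutdown event script query the other instance before halting. The ports, paths, and exit-code convention below are assumptions to check against your apcupsd/apccontrol version (the apcupsd manual documents exit code 99 as "skip the default action"):

        #!/bin/sh
        # Hypothetical doshutdown hook for UPS "A"; UPS "B" answers on port 3552.
        PEER=$(apcaccess status localhost:3552 | awk '/^STATUS/ {print $3}')

        if [ "$PEER" = "ONLINE" ]; then
            # The other feed still has mains power - do not halt the server.
            exit 99    # tells apccontrol to skip its default shutdown action
        fi

        # Both feeds are on battery: let the normal shutdown proceed
        exit 0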

    Read the article

  • Open Source Security packages for Rails

    - by Edwin
    I'm currently creating a complete web application using Rails 3 to familiarize myself with its inner workings and to gain a better appreciation of a working web application's moving parts. (Plus, since I'm still working on my degree, I hope that it will give me a better idea of what's BS in my education requirements and which weaknesses/skills I should focus on.) The example application I'm working on is an ecommerce site, and I've already configured the backend, routes, controllers, and so on. As part of the application, I'd like to integrate a second layer of security on top of the one Rails already provides for user authentication. However, I've been unable to find any on Google, with the exception of OAuth - which, from my understanding, is meant to secure API calls. While I could roll my own secure authentication system, I'm only in my second year of college and recognize that A) I know little about security, and B) there are developers that know much more about security that are working on open-source projects. What are some actively developed open-source security packages or frameworks that can be easily added to Rails? Pros and cons are not necessary, as I can do the research myself. P.S. I'm not sure whether I posted this in the right SE site; please migrate to SO or Security if it is more appropriate there.

    Read the article

  • Hosting custom domains with IP address flexibility

    - by F21
    I am building a small service where users will be assigned a subdomain such as:

        myusername.myservice.com
        anotheruser.myservice.com

    I know that I can set up a wildcard vhost and, using some configuration regex, serve the files like so:

        myusername.myservice.com      ===> /var/www/myusername
        anotherusername.myservice.com ===> /var/www/anotherusername

    I would also like to allow users to alias their own domain names to their site. I understand that once a user adds a domain via my web interface, I can easily create a vhost for the domain in nginx and then reload the webserver. However, I would prefer NOT to have users add an A record with my webserver's IP address, as I would prefer to keep things flexible (for when we upgrade our infrastructure to something more complex to scale). What is the best way to achieve this?
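
    A hedged sketch of both halves, assuming nginx (names and paths are illustrative). The wildcard side can be a single vhost with a named regex capture; the custom-domain side can ask users to create a CNAME to a stable hostname you control, so only one A record ever changes when your infrastructure moves:

        # Wildcard vhost: myusername.myservice.com -> /var/www/myusername
        server {
            listen 80;
            server_name ~^(?<user>[a-z0-9-]+)\.myservice\.com$;
            root /var/www/$user;
        }

        # Aliased domains: customers add a CNAME instead of an A record, e.g.
        #   www.customerdomain.com.  IN CNAME  endpoint.myservice.com.
        # You still generate a vhost per aliased domain, but an infrastructure
        # move only requires updating endpoint.myservice.com's A record.

    One caveat worth noting: a zone apex ("naked" domain) cannot be a CNAME, so apex aliases would still need an A record or a DNS provider with ALIAS/ANAME support.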

    Read the article

  • Moving from Ubuntu desktop to Ubuntu Server via SSH

    - by Daniel Elessedil Kjeserud
    So a little while ago I installed regular Ubuntu on a home server, but that gave me a lot of extra packages. What I should have done was install Ubuntu Server, since I don't even own a screen to connect to it. Does anybody know of a way to convert my Ubuntu machine to an Ubuntu Server machine in one big swoop? It has to be done over SSH, since, like I said, I don't have a screen to connect to it. It's currently running 9.10, about to be upgraded to 10.04.
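
    There is no single "convert" switch, but a heavily hedged sketch of the usual approach is to install the server kernel and strip the desktop stack over SSH. The package names are from the 9.10/10.04 era and are assumptions to double-check, and keep openssh-server installed throughout:

        # Install the server kernel flavour and keep SSH available
        sudo apt-get update
        sudo apt-get install linux-server openssh-server

        # Remove the desktop meta-package, then the bulk of the GUI stack
        sudo apt-get remove --purge ubuntu-desktop
        sudo apt-get remove --purge $(dpkg -l 'gnome*' | awk '/^ii/ {print $2}')
        sudo apt-get autoremove --purge

    Doing the 10.04 upgrade first and cleaning up afterwards may be less error-prone than doing both at once.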

    Read the article

  • 10 Windows Tweaking Myths Debunked

    - by Chris Hoffman
    Windows is big, complicated, and misunderstood. You’ll still stumble across bad advice from time to time when browsing the web. These Windows tweaking, performance, and system maintenance tips are mostly just useless, but some are actively harmful. Luckily, most of these myths have been stomped out on mainstream sites and forums. However, if you start searching the web, you’ll still find websites that recommend you do these things.

    Erase Cache Files Regularly to Speed Things Up: You can free up disk space by running an application like CCleaner, another temporary-file-cleaning utility, or even the Windows Disk Cleanup tool. In some cases, you may even see an old computer speed up when you erase a large amount of useless files. However, running CCleaner or similar utilities every day to erase your browser’s cache won’t actually speed things up. It will slow down your web browsing as your web browser is forced to redownload the files all over again, and reconstruct the cache you regularly delete. If you’ve installed CCleaner or a similar program and run it every day with the default settings, you’re actually slowing down your web browsing. Consider at least preventing the program from wiping out your web browser cache.

    Enable ReadyBoost to Speed Up Modern PCs: Windows still prompts you to enable ReadyBoost when you insert a USB stick or memory card. On modern computers, this is completely pointless — ReadyBoost won’t actually speed up your computer if you have at least 1 GB of RAM. If you have a very old computer with a tiny amount of RAM — think 512 MB — ReadyBoost may help a bit. Otherwise, don’t bother.

    Open the Disk Defragmenter and Manually Defragment: On Windows 98, users had to manually open the defragmentation tool and run it, ensuring no other applications were using the hard drive while it did its work. Modern versions of Windows are capable of defragmenting your file system while other programs are using it, and they automatically defragment your disks for you. If you’re still opening the Disk Defragmenter every week and clicking the Defragment button, you don’t need to do this — Windows is doing it for you unless you’ve told it not to run on a schedule. Modern computers with solid-state drives don’t have to be defragmented at all.

    Disable Your Pagefile to Increase Performance: When Windows runs out of empty space in RAM, it swaps out data from memory to a pagefile on your hard disk. If a computer doesn’t have much memory and it’s running slow, it’s probably moving data to the pagefile or reading data from it. Some Windows geeks seem to think that the pagefile is bad for system performance and disable it completely. The argument seems to be that Windows can’t be trusted to manage a pagefile and won’t use it intelligently, so the pagefile needs to be removed. As long as you have enough RAM, it’s true that you can get by without a pagefile. However, if you do have enough RAM, Windows will only use the pagefile rarely anyway. Tests have found that disabling the pagefile offers no performance benefit.

    Enable CPU Cores in MSConfig: Some websites claim that Windows may not be using all of your CPU cores or that you can speed up your boot time by increasing the amount of cores used during boot. They direct you to the MSConfig application, where you can indeed select an option that appears to increase the amount of cores used. In reality, Windows always uses the maximum amount of processor cores your CPU has. (Technically, only one core is used at the beginning of the boot process, but the additional cores are quickly activated.) Leave this option unchecked. It’s just a debugging option that allows you to set a maximum number of cores, so it would be useful if you wanted to force Windows to only use a single core on a multi-core system — but all it can do is restrict the amount of cores used.

    Clean Your Prefetch To Increase Startup Speed: Windows watches the programs you run and creates .pf files in its Prefetch folder for them. The Prefetch feature works as a sort of cache — when you open an application, Windows checks the Prefetch folder, looks at the application’s .pf file (if it exists), and uses that as a guide to start preloading data that the application will use. This helps your applications start faster. Some Windows geeks have misunderstood this feature. They believe that Windows loads these files at boot, so your boot time will slow down due to Windows preloading the data specified in the .pf files. They also argue you’ll build up useless files as you uninstall programs and .pf files will be left over. In reality, Windows only loads the data in these .pf files when you launch the associated application and only stores .pf files for the 128 most recently launched programs. If you were to regularly clean out the Prefetch folder, not only would programs take longer to open because they won’t be preloaded, Windows will have to waste time recreating all the .pf files. You could also modify the PrefetchParameters setting to disable Prefetch, but there’s no reason to do that. Let Windows manage Prefetch on its own.

    Disable QoS To Increase Network Bandwidth: Quality of Service (QoS) is a feature that allows your computer to prioritize its traffic. For example, a time-critical application like Skype could choose to use QoS and prioritize its traffic over a file-downloading program so your voice conversation would work smoothly, even while you were downloading files. Some people incorrectly believe that QoS always reserves a certain amount of bandwidth and this bandwidth is unused until you disable it. This is untrue. In reality, 100% of bandwidth is normally available to all applications unless a program chooses to use QoS. Even if a program does choose to use QoS, the reserved space will be available to other programs unless the program is actively using it. No bandwidth is ever set aside and left empty.

    Set DisablePagingExecutive to Make Windows Faster: The DisablePagingExecutive registry setting is set to 0 by default, which allows drivers and system code to be paged to the disk. When set to 1, drivers and system code will be forced to stay resident in memory. Once again, some people believe that Windows isn’t smart enough to manage the pagefile on its own and believe that changing this option will force Windows to keep important files in memory rather than stupidly paging them out. If you have more than enough memory, changing this won’t really do anything. If you have little memory, changing this setting may force Windows to push programs you’re using to the page file rather than push unused system files there — this would slow things down. This is an option that may be helpful for debugging in some situations, not a setting to change for more performance.

    Process Idle Tasks to Free Memory: Windows does things, such as creating scheduled system restore points, when you step away from your computer. It waits until your computer is “idle” so it won’t slow your computer and waste your time while you’re using it. Running the “Rundll32.exe advapi32.dll,ProcessIdleTasks” command forces Windows to perform all of these tasks while you’re using the computer. This is completely pointless and won’t help free memory or anything like that — all you’re doing is forcing Windows to slow your computer down while you’re using it. This command only exists so benchmarking programs can force idle tasks to run before performing benchmarks, ensuring idle tasks don’t start running and interfere with the benchmark.

    Delay or Disable Windows Services: There’s no real reason to disable Windows services anymore. There was a time when Windows was particularly heavy and computers had little memory — think Windows Vista and those “Vista Capable” PCs Microsoft was sued over. Modern versions of Windows like Windows 7 and 8 are lighter than Windows Vista and computers have more than enough memory, so you won’t see any improvements from disabling system services included with Windows. Some people argue for not disabling services, however — they recommend setting services from “Automatic” to “Automatic (Delayed Start)”. By default, the Delayed Start option just starts services two minutes after the last “Automatic” service starts. Setting services to Delayed Start won’t really speed up your boot time, as the services will still need to start — in fact, it may lengthen the time it takes to get a usable desktop as services will still be loading two minutes after booting. Most services can load in parallel, and loading the services as early as possible will result in a better experience. The “Delayed Start” feature is primarily useful for system administrators who need to ensure a specific service starts later than another service.

    If you ever find a guide that recommends you set a little-known registry setting to improve performance, take a closer look — the change is probably useless. Want to actually speed up your PC? Try disabling useless startup programs that run on boot, increasing your boot time and consuming memory in the background. This is a much better tip than doing any of the above, especially considering most Windows PCs come packed to the brim with bloatware.

    Read the article

  • Upgrade PHP to 5.3 in Ubuntu Server 8.04 with Plesk 9.5

    - by alcuadrado
    I have a dedicated server with Ubuntu 8.04, and I really need to upgrade PHP to version 5.3 in order to deploy a new version of the system. This version of PHP is the default one in Ubuntu 10.04, so I considered upgrading the OS, but after trying that I lost my Plesk installation, which annoyed my client. I tried adding the dotdeb.org repositories, but I don't know why, after running an apt-get upgrade, I get this:

        # apt-get upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages have been kept back:
          libapache2-mod-php5 php5 php5-cgi php5-cli php5-common php5-curl
          php5-gd php5-imap php5-mysql php5-sqlite php5-xsl
        0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded.

    Any idea why this is happening? Or do you know any alternative method (other than compiling my own binaries) to upgrade PHP, or to update Ubuntu without losing Plesk? Thanks!
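
    A hedged explanation of the "kept back" lines: apt-get upgrade never installs a package whose upgrade would add or remove other packages, and the dotdeb PHP 5.3 builds change their dependency set, so apt holds them. Letting apt resolve the new dependencies looks like this (whether Plesk 9.5 tolerates PHP 5.3 is a separate question worth checking first):

        # Allow upgrades that change the dependency set
        sudo apt-get update
        sudo apt-get dist-upgrade

        # Or target just the held packages
        sudo apt-get install php5 php5-cli php5-common libapache2-mod-php5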

    Read the article

  • Postfix relay to multiple servers and multiple users

    - by Frankie
    I currently have postfix configured so that all users get relayed by the local machine, with the exception of one user who gets relayed via Gmail. To that end I've added the following configuration:

        /etc/postfix/main.cf:
        # default options to allow relay via gmail
        smtp_use_tls=yes
        smtp_sasl_auth_enable = yes
        smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
        smtp_sasl_security_options = noanonymous
        # map the relayhosts according to user
        sender_dependent_relayhost_maps = hash:/etc/postfix/relayhost_maps
        # keep a list of user and passwords
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

        /etc/postfix/relayhost_maps:
        user-one@localhost [smtp.gmail.com]:587

        /etc/postfix/sasl_passwd:
        [smtp.gmail.com]:587 [email protected]:user-one-pass-at-google

    I know I can map multiple users to multiple passwords using smtp_sasl_password_maps, but that would mean all relaying would be done by Gmail, where I specifically want all relaying to be done by the localhost with the exception of some users. Now I would like to have a user-two@localhost (etc.) relay via Google with their own respective passwords. Is that possible?
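
    A hedged sketch of the missing piece: once smtp_sender_dependent_authentication is enabled, postfix looks up SASL credentials by sender address before falling back to the relayhost key, so each Gmail-relayed sender can carry its own password while everyone else still relays locally. Addresses and passwords below are placeholders:

        # /etc/postfix/main.cf (add)
        smtp_sender_dependent_authentication = yes

        # /etc/postfix/relayhost_maps
        user-one@localhost [smtp.gmail.com]:587
        user-two@localhost [smtp.gmail.com]:587

        # /etc/postfix/sasl_passwd - now keyed by sender
        user-one@localhost user.one@gmail.com:password-one
        user-two@localhost user.two@gmail.com:password-two

        # rebuild the hash maps and reload
        postmap /etc/postfix/relayhost_maps /etc/postfix/sasl_passwd
        postfix reload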

    Read the article

  • Apple MagicMouse randomly loses connection

    - by Yuval
    My Magic Mouse periodically loses its connection to my MacBook, which sits approximately a foot and a half away from it. There are no software updates available for Snow Leopard (10.6.3). I have a Bluetooth keyboard (that sits a little closer to the MacBook, though I'm not sure this matters) that rarely suffers from this problem. Google results mostly suggest updating the software, but I have the most updated version of it already. Does anybody have any idea what could be causing the issue? Is this simply a defective mouse? Additional info: I tried replacing the batteries with new ones, making sure they sit tightly in their position, with no change in behavior (it still disconnects periodically). Also, this doesn't seem to be a case of the mouse turning itself off to save battery - it does not reconnect when I move or click it. If I wait long enough (a couple of minutes) it reconnects on its own. Otherwise I have to go to the Bluetooth menu with my MacBook touchpad and choose "yuval's mouse - Connect".

    Read the article

  • Database Connectivity Test with UDL File

    - by Ben Griswold
    I bounced around between projects a lot last week. What each project had in common was the need to validate at least one SQL connection. Whether you have SQL tools like SSMS installed or not, this is a very easy task if you are aware of UDL (Universal Data Link) files. Create a new file and name it anything, as long as it has the .udl extension. Open the file, choose a provider, and click Next >> or navigate to the Connection tab to provide connection information. Once you provide server and login credentials, the database list will populate. At this point, you know the connection is valid, but go ahead and click the Test Connection button anyway. On the final tab, you can provide extra connection information like Application Name, which can come in handy. The All tab is beneficial if you want to build a valid connection string to include in your own applications. If you save the file and then open it in Notepad, you'll find said connection string: Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=(local);Application Name=TestApp I hope this tip helps save you some time. How do you test if you don't have SSMS installed?
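
    One hedged answer to the closing question: if the SQL command-line tools happen to be present (they often are on servers without SSMS), sqlcmd makes the same smoke test scriptable. The server name and credentials below are placeholders:

        REM Windows authentication
        sqlcmd -S myserver -E -Q "SELECT @@VERSION"

        REM SQL authentication
        sqlcmd -S myserver -U myuser -P mypassword -Q "SELECT @@VERSION"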

    Read the article

  • Go Big or Go Home

    - by user12601034
    For those who don’t know, Oracle sponsors a group called “OWL” – Oracle Women’s Leadership - and the purpose of the group is to create local and global opportunities that support, educate and empower current and future women leaders at Oracle. This week, I had the opportunity to attend the Denver OWL roadshow, and I was really impressed with the quality of speakers and interactions that I experienced. One theme that arose throughout the day was that of “Lean In.” In a nutshell, “Lean In” requires you to take advantage of the opportunities that you’re given. One of my personal mantras is “Go big or go home."  That is, if you’re not willing to give it your all, don’t do it at all. Regardless of how you phrase it, it’s a life lesson that I believe needs to be tossed in our face every so often simply, if for no other reason, to get our attention. You are given a finite amount of time in your life; in your job role; in your interactions with others. Do you make the most of the opportunities given to you every day? Or do you believe that life just happens, and you have to deal with whatever is handed to you? I have a challenge for you. Set aside any concerns or fears you have about something and Lean In. Make the most of an opportunity presented to you…or make your own opportunity! If you start with just one thing, you’ll start building a mindset to make the most of additional opportunities. Not only will you be better for leaning in, but I’m betting that those around you will be better for it as well.

    Read the article

  • As a C# developer, would you learn Java to develop for Android or use MonoDroid instead?

    - by Dan Tao
    I'd consider myself pretty well versed in C#. It's my language of choice at the moment, and it's where basically all my professional experience lies. Still, I'm puzzled by the existence of the MonoDroid project. My understanding has always been that C# and Java are very close. Like, if you know one, you can learn the other really quickly. So, as I've considered developing my first Android app, I just assumed I would familiarize myself with Java enough to get started and then just sort of learn as I go. Wouldn't this make more sense than using MonoDroid, which is likely to be less feature-rich than the Java Android SDK, and requires learning its own API (albeit a .NET API) anyway? I just feel like it would be better to learn a new language (and an extremely popular one at that) and get some experience in it—when it's so close to what you already know anyway—rather than stick with a technology you're experienced with, without gaining any more valuable skills. Maybe I'm grossly misrepresenting the average potential MonoDroid user. Maybe it's more for people who are experienced in Java and .NET and just prefer .NET. Or maybe (in fact it's likely) there are other factors I just haven't considered. I'm just wondering, why would you use MonoDroid instead of just developing for Android using Java?

    Read the article

  • I can't scroll my tilemap ... HELP!

    - by Sri Harsha Chilakapati
    Hello, I'm trying to make my own game engine in Java. I have completed all the necessary classes, but I can't figure out the TileGame class. It just won't scroll, and there are no exceptions either. Here I'm listing the code.

    TileGame.java:

        @Override
        public void draw(Graphics2D g) {
            if (back != null) {
                back.render(g);
            }
            if (follower != null) {
                follower.render(g);
                follower.draw(g);
            }
            for (int i = 0; i < actors.size(); i++) {
                Actor actor = actors.get(i);
                if (actor != follower && getVisibleRect().intersects(actor.getBounds())) {
                    g.drawImage(actor.getAnimation().getFrameImage(),
                                actor.x - OffSetX, actor.y - OffSetY, null);
                    actor.draw(g);
                }
            }
        }

        /**
         * This method returns the visible rectangle
         * @return The visible rectangle
         */
        public Rectangle getVisibleRect() {
            return new Rectangle(OffSetX, OffSetY, global.WIDTH, global.HEIGHT);
        }

        @Override
        public void update() {
            if (follower != null) {
                if (scrollHorizontally) {
                    OffSetX = global.WIDTH / 2 - Math.round((float) follower.x) - tileSize;
                    OffSetX = Math.min(OffSetX, 0);
                    OffSetX = Math.max(OffSetX, global.WIDTH - mapWidth);
                }
                if (scrollVertically) {
                    OffSetY = global.HEIGHT / 2 - Math.round((float) follower.y) - tileSize;
                    OffSetY = Math.min(OffSetY, 0);
                    OffSetY = Math.max(OffSetY, global.HEIGHT - mapHeight);
                }
            }
            for (int i = 0; i < actors.size(); i++) {
                Actor actor1 = actors.get(i);
                if (getVisibleRect().contains(actor1.x, actor1.y)) {
                    actor1.update();
                    for (int j = 0; j < actors.size(); j++) {
                        Actor actor2 = actors.get(j);
                        if (actor1.isCollidingWith(actor2)) {
                            actor1.collision(actor2);
                            actor2.collision(actor1);
                        }
                    }
                }
            }
        }

    The problem is that all the actors work, but it just won't scroll. Help please. Thanks in advance.
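
    A hedged observation rather than a confirmed diagnosis: the offsets are used with two different sign conventions. update() clamps OffSetX to be <= 0 (a translation you would add), draw() subtracts it, and getVisibleRect() treats it as the camera's world-space top-left corner, so the visible rect and the drawing disagree and culling/scrolling break. Picking one convention consistently - offset as the camera's top-left corner in world coordinates, clamped to the map - would look like this sketch:

        // Sketch: OffSetX is the visible rect's world-space left edge;
        // draw() keeps subtracting it, and getVisibleRect() is then correct as written.
        if (scrollHorizontally) {
            OffSetX = Math.round((float) follower.x) - global.WIDTH / 2;
            OffSetX = Math.max(OffSetX, 0);                       // left map edge
            OffSetX = Math.min(OffSetX, mapWidth - global.WIDTH); // right map edge
        }
        // ... the same pattern applies to OffSetY with HEIGHT and mapHeight ...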

    Read the article

  • Extract High Quality Icons from Files Using a Free Tool

    - by Lori Kaufman
    If you need to extract an icon from a program file or other type of file (such as .dll files), there are many free tools available that make the task easy. However, very few will extract high-quality icon images from the files. Most free icon extraction tools will extract smaller icon image sizes, such as 16×16, 32×32, or 48×48 pixels. Some icons come in larger sizes, such as icons used in Windows. There is a free, small utility called BeCyIconGrabber that allows you to view and save icons and cursors of any size contained in .exe, .dll, .icl, .ocx, .cpl, .scr, .ico, and .cur files. You can save the extracted icons individually as a .png, .bmp, .ico, or .cur file, or in groups within resource libraries, i.e., .dll or .icl files. BeCyIconGrabber can be downloaded as an installable file or as a portable executable that does not need to be installed. We downloaded the portable file.

    Read the article

  • Announcing: Great Improvements to Windows Azure Web Sites

    - by ScottGu
    I’m excited to announce some great improvements to the Windows Azure Web Sites capability we first introduced earlier this summer. Today’s improvements include: a new low-cost shared mode scaling option; support for custom domains with shared and reserved mode web-sites using both CNAMEs and A-records (the latter enabling naked domains); continuous deployment support using both CodePlex and GitHub; and FastCGI extensibility. All of these improvements are now live in production and available to start using immediately.

    New “Shared” Scaling Tier

    Windows Azure allows you to deploy and host up to 10 web-sites in a free, shared/multi-tenant hosting environment. You can start out developing and testing web sites at no cost using this free shared mode, and it supports the ability to run web sites that serve up to 165MB/day of content (5GB/month). All of the capabilities we introduced in June with this free tier remain the same with today’s update.

    Starting with today’s release, you can now elastically scale up your web-site beyond this capability using a new low-cost “shared” option (which we are introducing today) as well as using a “reserved instance” option (which we’ve supported since June). Scaling to either of these modes is easy. Simply click on the “scale” tab of your web-site within the Windows Azure Portal, choose the scaling option you want to use with it, and then click the “save” button. Changes take only seconds to apply and do not require any code to be changed, nor the app to be redeployed.

    Below are some more details on the new “shared” option, as well as the existing “reserved” option.

    Shared Mode

    With today’s release we are introducing a new low-cost “shared” scaling mode for Windows Azure Web Sites. A web-site running in shared mode is deployed in a shared/multi-tenant hosting environment. Unlike the free tier, though, a web-site in shared mode has no quotas/upper-limit around the amount of bandwidth it can serve. The first 5 GB/month of bandwidth you serve with a shared web-site is free, and then you pay the standard “pay as you go” Windows Azure outbound bandwidth rate for outbound bandwidth above 5 GB.

    A web-site running in shared mode also now supports the ability to map multiple custom DNS domain names, using both CNAMEs and A-records, to it. The new A-record support we are introducing with today’s release provides the ability for you to support “naked domains” with your web-sites (e.g. http://microsoft.com in addition to http://www.microsoft.com). We will also in the future enable SNI based SSL as a built-in feature with shared mode web-sites (this functionality isn’t supported with today’s release – but will be coming later this year to both the shared and reserved tiers).

    You pay for a shared mode web-site using the standard “pay as you go” model that we support with other features of Windows Azure (meaning no up-front costs, and you pay only for the hours that the feature is enabled). A web-site running in shared mode costs only 1.3 cents/hr during the preview (so on average $9.36/month).

    Reserved Instance Mode

    In addition to running sites in shared mode, we also support scaling them to run within a reserved instance mode. When running in reserved instance mode your sites are guaranteed to run isolated within your own Small, Medium or Large VM (meaning no other customers run within it). You can run any number of web-sites within a VM, and there are no quotas on CPU or memory limits.

    You can run your sites using either a single reserved instance VM, or scale up to have multiple instances of them (e.g. 2 medium sized VMs, etc). Scaling up or down is easy – just select the “reserved” instance VM within the “scale” tab of the Windows Azure Portal, choose the VM size you want, the number of instances of it you want to run, and then click save. Changes take effect in seconds.

    Unlike shared mode, there is no per-site cost when running in reserved mode. Instead you pay only for the reserved instance VMs you use – and you can run any number of web-sites you want within them at no extra cost (e.g. you could run a single site within a reserved instance VM or 100 web-sites within it for the same cost). Reserved instance VMs start at 8 cents/hr for a small reserved VM.

    Elastic Scale-up/down

    Windows Azure Web Sites allows you to scale-up or down your capacity within seconds. This allows you to deploy a site using the shared mode option to begin with, and then dynamically scale up to the reserved mode option only when you need to – without you having to change any code or redeploy your application.

    If your site traffic starts to drop off, you can scale back down the number of reserved instances you are using, or scale down to the shared mode tier – all within seconds and without having to change code, redeploy, or adjust DNS mappings. You can also use the “Dashboard” view within the Windows Azure Portal to easily monitor your site’s load in real-time (it shows not only requests/sec and bandwidth but also stats like CPU and memory usage).

    Because of Windows Azure’s “pay as you go” pricing model, you only pay for the compute capacity you use in a given hour. So if your site is running most of the month in shared mode (at 1.3 cents/hr), but there is a weekend when it gets really popular and you decide to scale it up into reserved mode to have it run in your own dedicated VM (at 8 cents/hr), you only have to pay the additional pennies/hr for the hours it is running in the reserved mode. There is no upfront cost you need to pay to enable this, and once you scale back down to shared mode you return to the 1.3 cents/hr rate. This makes it super flexible and cost effective.

    Improved Custom Domain Support

    Web sites running in either “shared” or “reserved” mode support the ability to associate custom host names to them (e.g. www.mysitename.com). You can associate multiple custom domains to each Windows Azure Web Site. With today’s release we are introducing support for A-Records (a big ask by many users).

    With the A-Record support, you can now associate ‘naked’ domains to your Windows Azure Web Sites – meaning instead of having to use www.mysitename.com you can instead just have mysitename.com (with no sub-name prefix). Because you can map multiple domains to a single site, you can optionally enable both a www and naked domain for a site (and then use a URL rewrite rule/redirect to avoid SEO problems).

    We’ve also enhanced the UI for managing custom domains within the Windows Azure Portal as part of today’s release. Clicking the “Manage Domains” button in the tray at the bottom of the portal now brings up custom UI that makes it easy to manage/configure them. As part of this update we’ve also made it significantly smoother/easier to validate ownership of custom domains, and made it easier to switch existing sites/domains to Windows Azure Web Sites with no downtime.

    Continuous Deployment Support with Git and CodePlex or GitHub

    One of the more popular features we released earlier this summer was support for publishing web sites directly to Windows Azure using source control systems like TFS and Git. This provides a really powerful way to manage your application deployments using source control, and it is really easy to enable from a website’s dashboard page.

    The TFS option we shipped earlier this summer provides a very rich continuous deployment solution that enables you to automate builds and run unit tests every time you check in your web-site, and then, if they are successful, automatically publish to Azure.

    With today’s release we are expanding our Git support to also enable continuous deployment scenarios and integrate with projects hosted on CodePlex and GitHub. This support is enabled with all web-sites (including those using the “free” scaling mode).

    Starting today, when you choose the “Set up Git publishing” link on a website’s “Dashboard” page, you’ll see two additional options show up when Git based publishing is enabled for the web-site. You can click on either the “Deploy from my CodePlex project” link or the “Deploy from my GitHub project” link to walk through a simple workflow that configures a connection between your website and a source repository you host on CodePlex or GitHub. Once this connection is established, CodePlex or GitHub will automatically notify Windows Azure every time a checkin occurs. This will then cause Windows Azure to pull the source and compile/deploy the new version of your app automatically. The below two videos walk through how easy it is to enable this workflow and deploy both an initial app and then make a change to it:

    - Enabling Continuous Deployment with Windows Azure Websites and CodePlex (2 minutes)
    - Enabling Continuous Deployment with Windows Azure Websites and GitHub (2 minutes)

    This approach enables a really clean continuous deployment workflow, and makes it much easier to support a team development environment using Git. Note: today’s release supports establishing connections with public GitHub/CodePlex repositories. Support for private repositories will be enabled in a few weeks.

    Support for multiple branches

    Previously, we only supported deploying from the git ‘master’ branch. Often, though, developers want to deploy from alternate branches (e.g. a staging or future branch). This is now a supported scenario – both with standalone git based projects, as well as ones linked to CodePlex or GitHub. This enables a variety of useful scenarios.

    For example, you can now have two web-sites – a “live” and “staging” version – both linked to the same repository on CodePlex or GitHub. You can configure one of the web-sites to always pull whatever is in the master branch, and the other to pull what is in the staging branch. This enables a really clean way to enable final testing of your site before it goes live. (A 1 minute video demonstrates how to configure which branch to use with a web-site.)

    Summary

    The above features are all now live in production and available to use immediately. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using them today. Visit the Windows Azure Developer Center to learn more about how to build apps with it.

    We’ll have even more new features and enhancements coming in the weeks ahead – including support for the recent Windows Server 2012 and .NET 4.5 releases (we will enable new web and worker role images with Windows Server 2012 and .NET 4.5 next month). Keep an eye out on my blog for details as these new features become available.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Dicom: What are my options?

    - by Peter Turner
    After a cursory Googling, I can't find a legitimate list of DICOM vendors. I've tried DCM4Chee, Conquest, and PacsOne. Each server seems to have its own quirks and annoyances, memory leaks, etc., so I'd like to see what people use for their DICOM servers. Usually Wikipedia would have something like this at the bottom of the article, but it doesn't, so I'm wondering if the SF community can create a canonical list. I should disclose that while I do not represent any DICOM server vendor, there is a guy in my office who will buy me a huge burrito for each DICOM server I successfully install.
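
    A hedged aside for anyone evaluating candidates: DCMTK ships a C-ECHO client that makes a quick smoke test of any DICOM server easy. The host, port, and AE titles below are examples to replace with the server's actual settings:

        # DICOM "ping" (C-ECHO) against a server under evaluation
        echoscu -v -aet TESTSCU -aec ANY-SCP localhost 11112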

    Read the article
