Q:
ASP.NET Proxy Application
Let me try to explain what I need. I have a server that is visible from the internet. What I need is to create an ASP.NET application that gets the request of a web site, sends it to an internal server, then gets the response and publishes the info. For the client this should be totally transparent.
For different reasons I cannot redirect the port to the internal server. What I can do - but don't know how; maybe the answer is there - is to create a new web site that is hosted on the other server.
A:
Why won't any old proxy software work for this? Why does it need to be an ASP.NET application? There are TONS of tools out there (both Windows and *nix) that will get the job done quite easily. Check Squid or NetProxy for starters.
If you need to integrate with IIS, IISProxy looks like it would do the trick too.
A:
I use Apache mod_proxy and mod_proxy_balancer. Works great running 5 domains on a cluster of 4 web boxes.
Q:
Is there any kind of non text interface to MySQL?
I have a MySQL query that returns a result with a single column of integers. Is there any way to get the MySQL C API to transfer this as actual integers rather than as ASCII text? For that matter, is there a way to get MySQL to do any of the API transfers as something other than ASCII text? I'm thinking this would save a bit of time in sprintf/sscanf or whatever else is used, as well as in bandwidth.
A:
You're probably out of luck, to be honest. Looking at the MySQL C API (http://dev.mysql.com/doc/refman/5.0/en/mysql-fetch-row.html, http://dev.mysql.com/doc/refman/5.0/en/c-api-datatypes.html, look at MYSQL_ROW) there doesn't seem to be a mechanism for returning data in its actual type... the joys of using structs I guess.
You could always implement a wrapper which checks the type member of the row's corresponding MYSQL_FIELD (http://dev.mysql.com/doc/refman/5.0/en/c-api-datatypes.html) and returns a C union, but that's probably poor advice; don't do that.
Q:
Publishing to IIS - Best Practices
I'm not new to web publishing, BUT I am new to publishing against a web site that is frequently used. Previously, the apps on this server were not hit very often, but we're rolling out a high demand application. So, what is the best practice for publishing to a live web server?
Is it best to:

1. Wait until the middle of the night when people won't be on it? (Yes, I can pretty much rely on that -- it's an intranet and therefore will have times of non-use.)
2. Publish when new updates are made to the trunk (dependent on build success, of course)?
If 2 is true, then that seems bad if someone is using that specific page or DLL and it gets overwritten.
...I'm sure there are lots of great places for this kind of thing, but I didn't use the right google search terms.
A:
@Nick DeVore wrote:
If 2 is true, then that seems bad if someone is using that specific page or DLL and it gets overwritten.
It's not really an issue if you're using the ASP.NET stack (WebForms, MVC or rolling your own) because all your aspx files get compiled and are therefore not touched by the web server. The /bin/ folder is completely shadow-copied somewhere else, so the libraries inside are not used by the web server either.
IIS will wait until all requests are done (there is some timeout, though) and then will proceed with compilation (if needed) and a restart of the AppDomain. If only a few files have changed, there won't even be an AppDomain restart; IIS will load the new assemblies (or compiled aspx/asmx/ascx files) into the existing AppDomain.
@Nick DeVore wrote:
Help me understand this a little bit more. Point me to the place where this is explained from Microsoft. Thanks!
Try google for "IIS AppDomain" keywords. I found What ASP.NET Programmers Should Know About Application Domains.
A:
We do most of our updates in the wee small hours.
Handy hint, if this is an ASP.NET site, whatever time of the day you roll out, drop in an App_Offline.htm file with a message explaining to users that the site is down for maintenance.
Scott Guthrie has more info here:
http://weblogs.asp.net/scottgu/archive/2006/04/09/442332.aspx
Q:
Personalized icon next to the URL address in the browser's address bar?
How do I write code where a company icon appears on the left side next to the URL address in the browser's address bar?
A:
You are looking for a Favicon.
A:
It loads www.whateveryouron.com/favicon.ico.
If your domain is robermyers.com, just put an accessible 16px favicon.ico icon in the root, and you're in like Flint.
Just try it on Google's or Stack Overflow's sites.
A:
You need to create a favicon. The favicon uses a standard (in Windows, at least) .ico file. If you have a logo, you can convert it at sites like http://www.favicongenerator.com/
In the <head> of your html page, use the <link> tag to define the location of the favicon like this:
<link type="image/x-icon" rel="shortcut icon" href="favicon.ico" />
You may need to refresh the page for the icon to display.
A:
load a file on the webserver called favicon.ico
A:
In modern browsers, file formats other than .ico work as well. You can even put animated GIFs in there!
A:
DynamicDrive has an easy favicon maker too.
Different pages/directories can also call a different .ico with code similar to y0mbo's example (this is what Mint uses):
<link href="sc/ph645.43/images/icons/mint.ico" rel="icon" type="text/x-icon" />
This is a good article on favicons in detail.
I suspect animated favicons have the potential to go the way of the blink tag. Mint is using theirs to match their overall design, if you visit in FF3, you'll see that it matches the green color that FF uses to designate a recognized SSL cert.
Q:
best way to persist data in .NET Web Service
I have a web service that queries data from this json file, but I don't want the web service to have to access the file every time. I'm thinking that maybe I can store the data somewhere else (maybe in memory) so the web service can just get the data from there the next time it's trying to query the same data. I kinda understand what needs to be done but I'm just not sure how to actually do it. How do we persist data in a web service?
Update:
Both suggestions, caching and using static variables, look good. Maybe I should just use both: check one first, and if the data isn't there, check the second; if it's not there either, then I'll look at the JSON file.
A:
Extending on Ice^^Heat's idea, you might want to think about where you would cache - either cache the contents of the json file in the Application cache like so:
Context.Cache.Insert("foo", _
Foo, _
Nothing, _
DateAdd(DateInterval.Minute, 30, Now()), _
System.Web.Caching.Cache.NoSlidingExpiration)
And then generate the results you need from that on every hit. Alternatively you can cache the webservice output on the function definition:
<WebMethod(CacheDuration:=60)> _
Public Function HelloWorld() As String
Return "Hello World"
End Function
Info gathered from XML Web Service Caching Strategies.
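In C#, the same check-the-cache-then-load pattern looks roughly like this (the cache key, the MyRecord type and LoadJsonFile() are illustrative names, not from the question):
private List<MyRecord> GetCachedData()
{
    // Try the Application cache first; fall back to parsing the json file.
    List<MyRecord> data = (List<MyRecord>)Context.Cache["jsonData"];
    if (data == null)
    {
        data = LoadJsonFile(); // parse the json file here
        Context.Cache.Insert("jsonData", data, null,
            DateTime.Now.AddMinutes(30),
            System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return data;
}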
A:
What about using a global or static collection object? Is that a good idea?
A:
ASP.NET caching works just as well with Web services so you can implement regular caching as explained here: http://msdn.microsoft.com/en-us/library/aa478965.aspx
A:
To echo klughing, if your JSON data isn't expected to change often, I think the simplest way to cache it is to use a static collection of some kind - perhaps a DataTable.
First, parse your JSON data into a System.Data.DataTable, and make it static in your Web service class. Then, access the static object. The data should stay cached until IIS recycles your application pool.
public class WebServiceClass
{
private static DataTable _myData = null;
public static DataTable MyData
{
get
{
if (_myData == null)
{
_myData = ParseJsonDataReturnDT();
}
return _myData;
}
}
[WebMethod]
public string GetData()
{
//... do some stuff with MyData and return a string ...
return MyData.Rows[0]["MyColumn"].ToString();
}
}
Q:
How do I write those cool command line code generators (rails, sproutcore, webgen)
I've grokked the code for all the aforementioned apps and I still can't find a straightforward way to create a static directory structure from a single command.
A:
Check out rubigen. You can also view a presentation by Dr Nic.
A:
So what you want is to able to issue a command that will generate an entire directory tree?
cp -r <template> <destination>
Or am I misunderstanding? If you want to generate a consistent directory structure, your best bet is to simply copy it from a template. Fast, easy, done.
Q:
Standard Signature Text in a Message using Exchange Server
Does anyone know how to do this without using a third-party program? If there's no way to do it natively, can someone recommend an add-on?
EDIT: I need to add this on the server so all users have the same signature.
Thanks
A:
You need to create your own exchange message sink to do this. Here's a classic VB example from MS KB:
http://support.microsoft.com/kb/317327
and a VB Script one:
http://support.microsoft.com/kb/317680
And lots of goodness from MSDN about Exchange 2003 Transport Event Sinks:
http://msdn.microsoft.com/en-us/library/ms526223(EXCHG.10).aspx
If you're running Exchange 2007 then you can use Transport Rules:
http://msexchangeteam.com/archive/2006/12/12/431879.aspx
http://www.msexchange.org/tutorials/Using-Transport-Rules-Creating-Disclaimers-Exchange-Server-2007.html
A:
We used CodeTwo's Exchange rules for a while on Exchange 2003.
However, there is a known problem with it: if messages stay in the queue for 2-3 minutes, Exchange itself sends out the message without the footer. Most of the time it's not a problem, but we have something like 700 people in our organization. If there are a lot of emails and some of them contain attachments, then the virus scanner (MS Antigen) holds them for a while.
Otherwise it's a perfect solution if you have a smaller group of users to manage.
From another point of view: our users like to have some kind of control over the signature. We generated the signatures and put them into their Outlooks. They like knowing and seeing that the signature is there and what it looks like.
Q:
mod_rewrite to alias one file suffix type to another
I hope I can explain this clearly enough, but if not let me know and I'll try to clarify.
I'm currently developing a site using ColdFusion and have a mod_rewrite rule in place to make it look like the site is using PHP. Any requests for index.php get processed by index.cfm (the rule maps *.php to *.cfm).
This works great - so far, so good. The problem is that I want to return a 404 status code if index.cfm (or any ColdFusion page) is requested directly.
If I try to block access to *.cfm files using mod_rewrite it also returns a 404 for requests to *.php.
I figure I might have to change my Apache config rather than use .htaccess
A:
You can use the S flag to skip the 404 rule, like this:
RewriteEngine on
# Do not separate these two rules so long as the first has S=1
RewriteRule (.*)\.php$ $1.cfm [S=1]
RewriteRule \.cfm$ - [R=404]
If you are also using the Alias option then you should also add the PT flag. See the mod_rewrite documentation for details.
A:
Post the rules you already have as a starting point so people don't have to recreate it to help you.
I would suggest testing [L] on the rule that maps .php to .cfm files as the first thing to try.
A:
You have to use two distinct groups of rewrite rules, one for .php, the other for .cfm, and make them mutually exclusive with RewriteCond %{REQUEST_FILENAME}. And make use of the [L] flag as suggested by jj33.
You can keep your rules in .htaccess.
Q:
Visual Studio Setup Project Custom Dialog
I have created a custom dialog for Visual Studio Setup Project using the steps described
here
Now I have a combobox in one of my dialogs. I want to populate the combobox with a list of all SQL Server instances running on the local network. It's trivial to get the server list ... but I'm completely lost on how to make them display in the combobox. I would appreciate your help and some code might also be nice as I'm beginner :).
A:
I've always found the custom dialogs in visual studio setup projects to be woefully limited and barely functional.
By contrast, I normally create custom actions that display WinForms GUIs for any remotely difficult tasks during setup. Works really well, and you can do just about anything you want by creating a custom action and passing a few parameters across.
In the day job we built a collection of common custom actions for tasks like application config and database creation / script execution to get around custom dialog limitations.
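A minimal sketch of that pattern, assuming a standard Installer-based custom action (SqlInstancePickerForm and the "targetdir" parameter are illustrative):
using System.Collections;
using System.ComponentModel;
using System.Configuration.Install;
using System.Windows.Forms;

[RunInstaller(true)]
public class GuiCustomAction : Installer
{
    public override void Install(IDictionary stateSaver)
    {
        base.Install(stateSaver);

        // Values from the setup project arrive via CustomActionData,
        // e.g. /targetdir="[TARGETDIR]\", and are read from Context.Parameters.
        string targetDir = Context.Parameters["targetdir"];

        // Show an ordinary WinForms dialog instead of a limited setup dialog.
        using (SqlInstancePickerForm form = new SqlInstancePickerForm(targetDir))
        {
            if (form.ShowDialog() != DialogResult.OK)
                throw new InstallException("Setup was cancelled by the user.");
        }
    }
}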
A:
I guess you'll have to go beyond the out-of-the-box setup and deployment package and try a third party app.
You may want to look at:
Wix
Nullsoft Scriptable Install System
Both are free; they might give you the customization that you need.
Q:
Using mod_rewrite to Mimic SSL Virtual Hosts?
What is the best way to transparently rewrite a URL over an SSL connection with Apache 2.2?
Apache 2 does not natively support multiple name-based virtual hosts for an SSL connection and I have heard that mod_rewrite can help with this. I would like to do something like this:
I have set up the server so that the sites can be accessed by
https://secure.example.com/dbadmin
but I would like to have this as https://dbadmin.example.com
How do I set it up so that the Rewrite rule will rewrite dbadmin.example.com to secure.example.com/dbadmin, but without displaying the rewrite on the client's address bar (i.e. the client will still just see dbadmin.example.com), all over https?
A:
Configure a single VirtualHost to serve both secure.example.com and dbadmin.example.com (making it the only *:443 VirtualHost achieves this). You can then use mod_rewrite to adjust the URI for requests to dbadmin.example.com:
<VirtualHost *:443>
ServerName secure.example.com
ServerAlias dbadmin.example.com
RewriteEngine on
RewriteCond %{SERVER_NAME} dbadmin.example.com
RewriteRule !/dbadmin(.*)$ /dbadmin$1
</VirtualHost>
Your SSL certificate will need to be valid for both secure.example.com and dbadmin.example.com. It can be a wildcard certificate as mentioned by Terry Lorber, or you can use the subjectAltName field to add additional host names.
If you're having trouble, first set it up on <VirtualHost *> and check that it works without SSL. The SSL connection and certificate is a separate layer of complexity that you can set up after the URI rewriting is working.
A:
Unless your SSL certificate is the "wildcard" or multi-site kind, then I don't think this will work. The rewrite will display in the browser and the name in the address bar must be valid against the certificate, or your users will see a security error (which they can always accept and continue, but that doesn't sound like what you'd like).
More here.
A:
There is Apache's mod_rewrite, or you could set up Apache to direct https://dbadmin.example.com to path/to/example.com/dbadmin on the server:
<VirtualHost *>
    ServerName subdomain.domain.com
    DocumentRoot /home/httpd/htdocs/subdomain/
</VirtualHost>
Q:
ASP.NET UserControl's and DefaultEvent
Outline
OK, I have Google'd this and already expecting a big fat NO!! But I thought I should ask since I know sometimes there can be the odd little gem of knowledge lurking around in peoples heads ^_^
I am working my way through some exercises in a book for study, and this particular exercise is User Controls. I have cobbled together a control and would like to set the DefaultEvent for it (having done this for previous controls) so when I double-click it, the default event created is whatever I specify it to be.
NOTE: This is a standard User Control (.ascx), NOT a custom rendered control.
Current Code
Here is the class & event definition:
[System.ComponentModel.DefaultEvent("OKClicked")]
public partial class AddressBox : System.Web.UI.UserControl
{
public event EventHandler OKClicked;
Current Result
Now, when I double-click the control when it is on an ASPX page, the following is created:
protected void AddressBox1_Load(object sender, EventArgs e)
{
}
Not quite what I was expecting! So, my question:
Is it possible to define a DefaultEvent for a UserControl? Is it a hack? If it's [not] supported, is there a reason?
Side Note: How do we put underscores in code? I can't seem to put an escape char in?
A:
Here is a possible answer, without testing (like martin did).
In Reflector, you will see that the DefaultEventAttribute allows itself to be inherited.
In Reflector, you see that the UserControl class has its default event set to the Load event.
So the possible reason is that even though you are decorating your user control with the default event of OKClicked, VS might still be thinking that the default event is Load, as it's being inherited from UserControl, whose default event is Load.
Just a high level guess at what might be happening.
A:
OK, I checked this out: inheriting from WebControl rather than UserControl, all worked fine.
Looks like Darren Kopp takes the crown for this one! Thanks for the input!
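Something along these lines, reconstructed from the description above rather than the original code:
using System;
using System.ComponentModel;
using System.Web.UI.WebControls;

[DefaultEvent("OKClicked")]
public class AddressBox : WebControl
{
    // Double-clicking the control in the designer should now generate
    // a handler for OKClicked rather than Load.
    public event EventHandler OKClicked;

    protected virtual void OnOKClicked(EventArgs e)
    {
        if (OKClicked != null)
            OKClicked(this, e);
    }
}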
Q:
In .NET, will empty method calls be optimized out?
Given an empty method body, will the JIT optimize out the call (I know the C# compiler won't). How would I go about finding out? What tools should I be using and where should I be looking?
Since I'm sure it'll be asked, the reason for the empty method is a preprocessor directive.
@Chris:
Makes sense, but it could optimize out calls to the method. So the method would still exist, but static calls to it could be removed (or at least inlined...)
@Jon:
That just tells me the language compiler doesn't do anything. I think what I need to do is run my dll through ngen and look at the assembly.
A:
This chap has quite a good treatment of JIT optimisations; do a search on the page for 'method is empty' - it's about halfway down the article:
http://www.codeproject.com/KB/dotnet/JITOptimizations.aspx
Apparently empty methods do get optimised out through inlining what is effectively no code.
@Chris: I do realise the that the methods will still be part of the binary and that these are JIT optimisations :-). On a semi-related note, Scott Hanselman had quite an interesting article on inlining in Release build call stacks:
http://www.hanselman.com/blog/ReleaseISNOTDebug64bitOptimizationsAndCMethodInliningInReleaseBuildCallStacks.aspx
A:
I'm guessing your code is like:
void DoSomethingIfCompFlag() {
#if COMPILER_FLAG
//your code
#endif
}
This won't get optimised out, however:
partial void DoSomethingIfCompFlag();
#if COMPILER_FLAG
partial void DoSomethingIfCompFlag() {
//your code
}
#endif
The first empty method is partial, and the C#3 compiler will optimise it out.
By the way: this is basically what partial methods are for. Microsoft added code generators to their Linq designers that need to call methods that by default don't do anything.
Rather than force you to overload the method you can use a partial.
This way the partials are completely optimised out if not used and no performance is lost, rather than adding the overhead of the extra empty method call.
A:
No, empty methods are never optimized out. Here are a couple reasons why:
The method could be called from a derived class, perhaps in a different assembly
The method could be called using Reflection (even if it is marked private)
Edit: Yes, from looking at that (excellent) Code Project doc, the JITer will eliminate calls to empty methods. But the methods themselves will still be compiled and part of your binary for the reasons I listed.
A:
All things being equal, yes it should be optimized out. The JIT inlines functions where appropriate and there are few things more appropriate than empty functions :)
If you really want to be sure then change your empty method to throw an exception and print out the stack trace it contains.
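A minimal sketch of that experiment (names are illustrative; run a release build outside the debugger for a meaningful result):
using System;

class InliningProbe
{
    // Formerly the empty method; now throws so the call can be observed.
    static void ProbeMethod()
    {
        throw new InvalidOperationException("probe");
    }

    static void Main()
    {
        try
        {
            ProbeMethod();
        }
        catch (InvalidOperationException ex)
        {
            // Frames that were inlined away will be missing from this trace.
            Console.WriteLine(ex.StackTrace);
        }
    }
}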
Q:
What is the best way to upload a file via an HTTP POST with a web form?
Basically, something better than this:
<input type="file" name="myfile" size="50">
First of all, the browse button looks different on every browser. Unlike the submit button on a form, you have to come up with some hack-y way to style it.
Secondly, there's no progress indicator showing you how much of the file has uploaded. You usually have to implement some kind of client-side way to disable multiple submits (e.g. change the submit button to a disabled button showing "Form submitting... please wait.") or flash a giant warning.
Are there any good solutions to this that don't use Flash or Java?
Yaakov: That product looks to be exactly what I'm looking for, but the cost is $1000 and it's specifically for ASP.NET. Are there any open source projects that cover the same or similar functionality?
A:
File upload boxes is where we're currently at if you don't want to involve other technologies like Flash, Java or ActiveX.
With plain HTML you are pretty much limited to the experience you've described (no progress bar, double submits, etc). If you are willing to use some javascript, you can solve some of the problems by giving feedback that the upload is in progress and even showing the upload progress (it is a hack because you shouldn't have to do a full round-trip to the server and back, but at least it works).
If you are willing to use Flash (which is available pretty much anywhere and on many platforms), you can overcome pretty much all of these problems. A quick googling turned up two such components, both of them free and open source. I never used any of them, but they look good. BTW, Flash isn't without its problems either, for example when using the multi-file uploader for slide share, the browser kept constantly crashing on me :-(
Probably the best solution currently is to detect dynamically if the user has Flash, and if it's the case, give her the flash version of the uploader, while still making it possible to choose the basic HTML one.
HTH
A:
You could have a look at the Fancy Upload script. Though it uses flash it still looks great.
A:
The problem here is that the browsers specifically work to block anything that changes the basic file upload input control. You can't change it with javascript for instance.
The reason is security - if I could script it I could build a page that when you visited it sent me various files from your hard disk. Not nice.
There are various workarounds at the moment, but they're different between IE and FX (I don't know about Safari, Opera, etc).
Look at what http://www.gmail.com does in IE and FX when you attach something to an e-mail.
I want to see that rubbish "Browse" button - it tells me that I'm not letting anything unexpected in.
A:
It is true, the file upload control is definitely behind the times. Hopefully this will be addressed in a future asp.net version.
Though it costs some money, I have found the Telerik upload control to have all of the functionality that you are looking for, including styling and progress updates (it also optimizes memory for large uploads).
Q:
Why do I get the error "Unable to update the password" when calling AzMan?
I'm doing an authorization check from a WinForms application with the help of the AzMan authorization provider from Enterprise Library and am receiving the following error:
Unable to update the password. The value provided as the current password is incorrect. (Exception from HRESULT: 0x8007052B) (Microsoft.Practices.EnterpriseLibrary.Security.AzMan)
Unable to update the password. The value provided as the current password is incorrect. (Exception from HRESULT: 0x8007052B) (Microsoft.Interop.Security.AzRoles)
The AzMan store is hosted in ADAM on another computer in the same domain. Other computers and users do not have this problem. The user making the call has read access to both ADAM and the AzMan store. The computer running the WinForms app and the computer running ADAM are both on Windows XP SP2.
I've had access problems with AzMan before that I've resolved, but this is a new one... What am I missing?
A:
For AzMan with ASP.NET, turn on impersonation in web.config (<identity impersonate="true" username="xx" password="xx" />), and make sure with an AD administrator that the impersonation account has "reader" permissions on the AzMan store; plus, give write permissions to this account on the Temporary ASP.NET Files folder (under C:\Windows\Microsoft.NET\<framework>).
A:
I found out from the event log that there was a security issue with the user making the call to AzMan from a remote computer. The user did not belong to the local Users group on the computer running ADAM/AzMan. When I corrected that, everything worked again.
Q:
Configuring VisualSVN Server to use _svn instead of .svn
We were having a problem with our build server not checking out modifications from source control despite recognizing that there had been changes.
It was traced to the control folder (not sure what its real name is); the existing working builds were using _svn. Clearing the working folder forced a new complete checkout, and I noticed that now the control folder is .svn. It looks like originally our integration routines were checking out code using _svn, but now it is using .svn.
The svn.exe being used during integration is from VisualSVN Server can I set this up to use _svn again?
How the original working copies were using _svn I don't know! - we only ever used VisualSVN Server and haven't changed this.
We had setup TortoiseSVN to use _svn following the recommendation that this works better for Visual Studio and have also installed TortoiseSVN on the build server in case it is ever needed. Could this be the cause?
Also, is this really necessary? As MSBuild is Microsoft's, is _svn recommended for it as it is for Visual Studio?
A:
The business about _svn vs. .svn was an issue with Visual Studio web projects only (and I'm fairly sure it was fixed in VS2005 anyway), it's not a general "_svn works better with VS" thing.
It's also only a working-copy issue, not a repository issue - i.e. it doesn't matter if some users of SVN are using clients set up to do _svn and some are using .svn - the repository won't know or care - (unless somehow you end-up with a load of these _svn/.svn files actually checked-into the repository which would be confusing in the extreme.)
Unless you have absolute, concrete evidence that .svn is causing you problems, I would stick with it wherever you can.
A:
I've been using .svn with Visual Studio 2008 and 2005 as well as on our CC.Net integration server (with MSBuild) with no issues. I'd stick with the .svn format.
A:
See the "Official support for Windows '_svn' directories (client and language bindings)" section of the Subversion 1.3 release notes: http://subversion.tigris.org/svn_1.3_releasenotes.html
Also be aware that you may be reading documentation which is several years old, a fact which might or might not be pertinent.
A:
As far as I know, _svn was needed because web applications had problems when one of their directories began with a dot.
Q:
Windows Forms Threading and Events - ListBox updates promptly but progressbar experiences huge delay
Our team is creating a new recruitment workflow system to replace an old one. I have been tasked with migrating the old data into the new schema. I have decided to do this by creating a small Windows Forms project, as the schemas are radically different and straight TSQL scripts are not an adequate solution.
The main sealed class 'ImportController' that does the work declares the following delegate event:
public delegate void ImportProgressEventHandler(object sender, ImportProgressEventArgs e);
public static event ImportProgressEventHandler importProgressEvent;
The main window starts a static method in that class using a new thread:
Thread dataProcessingThread = new Thread(new ParameterizedThreadStart(ImportController.ImportData));
dataProcessingThread.Name = "Data Importer: Data Processing Thread";
dataProcessingThread.Start(settings);
The ImportProgressEventArgs carries a string message, a max int value for the progress bar, and a current progress int value. The Windows Form subscribes to the event:
ImportController.importProgressEvent += new ImportController.ImportProgressEventHandler(ImportController_importProgressEvent);
And responds to the event in this manner, using its own delegate:
private delegate void TaskCompletedUIDelegate(string completedTask, int currentProgress, int progressMax);
private void ImportController_importProgressEvent(object sender, ImportProgressEventArgs e)
{
this.Invoke(new TaskCompletedUIDelegate(this.DisplayCompletedTask), e.CompletedTask, e.CurrentProgress, e.ProgressMax);
}
Finally the progress bar and listbox are updated:
private void DisplayCompletedTask(string completedTask, int currentProgress, int progressMax)
{
string[] items = completedTask.Split(new string[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries);
foreach (string item in items)
{
this.lstTasks.Items.Add(item);
}
if (currentProgress >= 0 && progressMax > 0 && currentProgress <= progressMax)
{
this.ImportProgressBar.Maximum = progressMax;
this.ImportProgressBar.Value = currentProgress;
}
}
The thing is, the ListBox seems to update very quickly, but the progress bar never moves until the batch is almost complete anyway. What gives?
A:
Maybe you can try the BackgroundWorker component. It makes threading easier. Examples here:
BackgroundWorker Threads and Supporting Cancel
Using the BackgroundWorker Component in .NET 2 applications
BackgroundWorker Sample
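For illustration, here is a minimal sketch of how the importer described above might be rewired onto a BackgroundWorker (the form and control names come from the question; the worker-aware ImportData overload is an assumption):
using System.ComponentModel;
using System.Windows.Forms;

public partial class ImportForm : Form
{
    private readonly BackgroundWorker worker = new BackgroundWorker();

    public ImportForm()
    {
        InitializeComponent();
        worker.WorkerReportsProgress = true;
        worker.DoWork += Worker_DoWork;
        worker.ProgressChanged += Worker_ProgressChanged; // raised on the UI thread
    }

    private void Worker_DoWork(object sender, DoWorkEventArgs e)
    {
        // Runs on a pool thread. Inside its loop, the (hypothetical) worker-aware
        // ImportData overload calls worker.ReportProgress(percent, message)
        // instead of raising a static event that the form must Invoke through.
        ImportController.ImportData(e.Argument, worker);
    }

    private void Worker_ProgressChanged(object sender, ProgressChangedEventArgs e)
    {
        ImportProgressBar.Value = e.ProgressPercentage;   // no this.Invoke needed
        lstTasks.Items.Add((string)e.UserState);
    }
}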
A:
Maybe outside of the scope, but sometimes it's useful to do an Application.DoEvents(); to make the GUI parts react to user input, such as pressing the cancel button on a status-bar dialog.
A:
Do you by any chance run Windows Vista? I've noticed exactly the same thing in some work-related applications. Somehow, there seems to be a delay when the progress bar "animates".
A:
@John
Thanks for the links.
@Will
There's no gain from thread pooling, as I know it will only ever spawn one thread. The use of a thread is purely to have a responsive UI while SQL Server is being pounded with reads and writes. It's certainly not a short-lived thread.
Regarding sledge-hammers, you're right. But, as it turns out, my problem was between screen and chair after all. I seem to have an unusual batch of data that has many, many more foreign key records than the other batches and just happens to get selected early in the process, meaning the currentProgress doesn't get ++'d for a good 10 seconds.
@All
Thanks for all your input. It got me thinking, which got me looking elsewhere in the code, which led to my aha moment of humility where I prove yet again that the error is usually human :)
| Windows Forms Threading and Events - ListBox updates promptly but progressbar experiences huge delay | Our team is creating a new recruitment workflow system to replace an old one. I have been tasked with migrating the old data into the new schema. I have decided to do this by creating a small Windows Forms project, as the schemas are radically different and straight TSQL scripts are not an adequate solution.
The main sealed class 'ImportController' that does the work declares the following delegate event:
public delegate void ImportProgressEventHandler(object sender, ImportProgressEventArgs e);
public static event ImportProgressEventHandler importProgressEvent;
The main window starts a static method in that class using a new thread:
Thread dataProcessingThread = new Thread(new ParameterizedThreadStart(ImportController.ImportData));
dataProcessingThread.Name = "Data Importer: Data Processing Thread";
dataProcessingThread.Start(settings);
The ImportProgressEventArgs carries a string message, a max int value for the progress bar, and a current progress int value. The Windows Form subscribes to the event:
ImportController.importProgressEvent += new ImportController.ImportProgressEventHandler(ImportController_importProgressEvent);
And responds to the event in this manner, using its own delegate:
private delegate void TaskCompletedUIDelegate(string completedTask, int currentProgress, int progressMax);
private void ImportController_importProgressEvent(object sender, ImportProgressEventArgs e)
{
this.Invoke(new TaskCompletedUIDelegate(this.DisplayCompletedTask), e.CompletedTask, e.CurrentProgress, e.ProgressMax);
}
Finally the progress bar and listbox are updated:
private void DisplayCompletedTask(string completedTask, int currentProgress, int progressMax)
{
string[] items = completedTask.Split(new string[] { Environment.NewLine }, StringSplitOptions.RemoveEmptyEntries);
foreach (string item in items)
{
this.lstTasks.Items.Add(item);
}
if (currentProgress >= 0 && progressMax > 0 && currentProgress <= progressMax)
{
this.ImportProgressBar.Maximum = progressMax;
this.ImportProgressBar.Value = currentProgress;
}
}
The thing is, the ListBox seems to update very quickly, but the progress bar never moves until the batch is almost complete anyway. What gives?
| [
"Maybe you can try the BackgroundWorker component. It makes threading easier. Examples here:\n\nBackgroundWorker Threads and Supporting Cancel\nUsing the BackgroundWorker Component in .NET 2 applications\nBackgroundWorker Sample\n\n",
"Maybe outside of the scope but, to sometimes its useful to do an Application.DoEvents(); to make the gui parts react to user input, such as pressing the cancel-button on a status bar dialog.\n",
"Do you by any chance run Windows Vista? I've noticed the exactly same thing in some work related applications. Somehow, there seem to be a delay when the progress bar \"animates\".\n",
"@John\nThanks for the links. \n@Will\nThere's no gain from threadpooling as I know it will only ever spawn one thread. The use of a thread is purely to have a responsive UI while SQL Server is being pounded with reads and writes. It's certainly not a short lived thread.\nRegarding sledge-hammers you're right. But, as it turns out my problem was between screen and chair after all. I seem to have an unusal batch of data that has many many many more foreign key records than the other batches and just happens to get selected early in the process meaning the currentProgress doesn't get ++'d for a good 10 seconds. \n@All\nThanks for all your input, it got me thinking, which got me looking elsewhere in the code, which led to my ahaa moment of humility where I prove yet again the error is usually human :)\n"
] | [
2,
0,
0,
0
] | [
"Are you sure that the UI thread is running freely during all this process? i.e. it's not sitting blocked-up on a Join or some other wait? That's what it looks like to me.\nThe suggestion of using BackgroundWorker is a good one - definitely superior to trying to sledge-hammer your way out of the problem with a load of Refresh/Update calls.\nAnd BackgroundWorker will use a pool thread, which is a friendlier way to behave than creating your own short-lived thread.\n",
"\nThere's no gain from threadpooling as\n I know it will only ever spawn one\n thread. The use of a thread is purely\n to have a responsive UI while SQL\n Server is being pounded with reads and\n writes. It's certainly not a short\n lived thread.\n\nOK, I appreciate that, and glad you found your bug, but have you looked at BackgroundWorker? It does pretty much exactly what you're doing, but in a standardised fashion (i.e. without your own delegates) and without the need to create a new thread - both of which are (perhaps small, but maybe still useful) advantages.\n"
] | [
-1,
-1
] | [
"delegates",
"events",
"forms",
"multithreading",
"windows"
] | stackoverflow_0000012095_delegates_events_forms_multithreading_windows.txt |
Q:
Variable Bindings in WPF
I’m creating a UserControl for a rich TreeView (one that has context menus for renaming nodes, adding child nodes, etc.). I want to be able to use this control to manage or navigate any hierarchical data structures I will create. I currently have it working for any data structure that implements the following interface (the interface need not actually be implemented, however, only the presence of these members is required):
interface ITreeItem
{
string Header { get; set; }
IEnumerable Children { get; }
}
Then in my UserControl, I use templates to bind my tree to the data structure, like so:
<TextBlock x:Name="HeaderTextBlock" Text="{Binding Path=Header}" />
What I would like to do is define the name of each of these members in my RichTreeView, allowing it to adapt to a range of different data structures, like so:
class MyItem
{
string Name { get; set; }
ObservableCollection<MyItem> Items;
}
<uc:RichTreeView ItemSource={Binding Source={StaticResource MyItemsProvider}}
HeaderProperty="Name" ChildrenProperty="Items" />
Is there any way to expose the Path of a binding inside a UserControl as a public property of that UserControl? Is there some other way to go about solving this problem?
A:
Perhaps this might help:
Create a new Binding when you set the HeaderProperty property on the Header dependency property:
Header property is your normal everyday DependencyProperty:
public string Header
{
get { return (string)GetValue(HeaderProperty); }
set { SetValue(HeaderProperty, value); }
}
public static readonly DependencyProperty HeaderProperty =
DependencyProperty.Register("Header", typeof(string), typeof(ownerclass));
and your HeaderProperty property works as follows:
public static readonly DependencyProperty HeaderPropertyProperty =
DependencyProperty.Register("HeaderProperty", typeof(string), typeof(ownerclass), new PropertyMetadata(OnHeaderPropertyChanged));
public string HeaderProperty
{
get { return (string)GetValue(HeaderPropertyProperty); }
set { SetValue(HeaderPropertyProperty, value); }
}
public static void OnHeaderPropertyChanged(DependencyObject obj, DependencyPropertyChangedEventArgs args)
{
if (args.NewValue != null)
{
ownerclass c = (ownerclass) obj;
Binding b = new Binding();
b.Path = new PropertyPath(args.NewValue.ToString());
c.SetBinding(ownerclass.HeaderProperty, b);
}
}
HeaderProperty is another everyday DependencyProperty, with a callback that is invoked as soon as HeaderProperty changes. So when it changes, it creates a binding on Header that binds to the path you set in HeaderProperty. :)
| Variable Bindings in WPF | I’m creating a UserControl for a rich TreeView (one that has context menus for renaming nodes, adding child nodes, etc.). I want to be able to use this control to manage or navigate any hierarchical data structures I will create. I currently have it working for any data structure that implements the following interface (the interface need not actually be implemented, however, only the presence of these members is required):
interface ITreeItem
{
string Header { get; set; }
IEnumerable Children { get; }
}
Then in my UserControl, I use templates to bind my tree to the data structure, like so:
<TextBlock x:Name="HeaderTextBlock" Text="{Binding Path=Header}" />
What I would like to do is define the name of each of these members in my RichTreeView, allowing it to adapt to a range of different data structures, like so:
class MyItem
{
string Name { get; set; }
ObservableCollection<MyItem> Items;
}
<uc:RichTreeView ItemSource={Binding Source={StaticResource MyItemsProvider}}
HeaderProperty="Name" ChildrenProperty="Items" />
Is there any way to expose the Path of a binding inside a UserControl as a public property of that UserControl? Is there some other way to go about solving this problem?
| [
"Perhaps this might help:\nCreate a new Binding when you set the HeaderProperty property on the Header dependency property:\nHeader property is your normal everyday DependencyProperty:\n public string Header\n {\n get { return (string)GetValue(HeaderProperty); }\n set { SetValue(HeaderProperty, value); }\n }\n\n public static readonly DependencyProperty HeaderProperty =\n DependencyProperty.Register(\"Header\", typeof(string), typeof(ownerclass));\n\nand the property of your HeaderProperty works as follows:\n public static readonly DependencyProperty HeaderPropertyProperty =\n DependencyProperty.Register(\"HeaderProperty\", typeof(string), typeof(ownerclass), new PropertyMetadata(OnHeaderPropertyChanged));\n\n public string HeaderProperty \n {\n get { return (string)GetValue(HeaderPropertyProperty); }\n set { SetValue(HeaderPropertyProperty, value); }\n }\n\n public static void OnHeaderPropertyChanged(DependencyObject obj, DependencyPropertyChangedEventArgs args)\n {\n if (args.NewValue != null)\n {\n ownerclass c = (ownerclass) obj;\n\n Binding b = new Binding();\n b.Path = new PropertyPath(args.NewValue.ToString());\n c.SetBinding(ownerclass.HeaderProperty, b);\n }\n }\n\nHeaderProperty is your normal everyday DependencyProperty, with a method that is invoked as soon as the HeaderProperty changes. So when it changes , it creates a binding on the Header which will bind to the path you set in the HeaderProperty. :)\n"
] | [
2
] | [] | [] | [
"c#",
"data_binding",
"wpf"
] | stackoverflow_0000011516_c#_data_binding_wpf.txt |
Q:
Access to global application settings
A database application that I'm currently working on, stores all sorts of settings in the database. Most of those settings are there to customize certain business rules, but there's also some other stuff in there.
The app contains objects that specifically do a certain task, e.g., a certain complicated calculation. Those non-UI objects are unit-tested, but also need access to lots of those global settings. The way we've implemented this right now, is by giving the objects properties that are filled by the Application Controller at runtime. When testing, we create the objects in the test and fill in values for testing (not from the database).
This works well, and in any case much better than having all those objects depend on some global Settings object --- that would of course effectively make unit testing impossible :) A disadvantage can be that you sometimes need to set a dozen properties, or that you need to let those properties 'percolate' into sub-objects.
So the general question is: how do you provide access to global application settings in your projects, without the need for global variables, while still being able to unit test your code? This must be a problem that's been solved 100's of times...
(Note: I'm not too much of an experienced programmer, as you'll have noticed; but I love to learn! And of course, I've already done research into this topic, but I'm really looking for some first-hand experiences)
A:
You could use Martin Fowlers ServiceLocator pattern. In php it could look like this:
class ServiceLocator {
    private static $soleInstance;
    private $globalSettings;

    public static function load($locator) {
        self::$soleInstance = $locator;
    }

    // allows production code or tests to inject the settings instance
    public function setGlobalSettings($globalSettings) {
        $this->globalSettings = $globalSettings;
    }

    public static function globalSettings() {
        if (!isset(self::$soleInstance->globalSettings)) {
            self::$soleInstance->setGlobalSettings(new GlobalSettings());
        }
        return self::$soleInstance->globalSettings;
    }
}
Your production code then initializes the service locator like this:
ServiceLocator::load(new ServiceLocator());
In your test-code, you insert your mock-settings like this:
$s = new ServiceLocator();
$s->setGlobalSettings(new MockGlobalSettings());
ServiceLocator::load($s);
It's a repository for singletons that can be exchanged for testing purposes.
A:
I like to model my configuration access off of the Service Locator pattern. This gives me a single point to get any configuration value that I need and by putting it outside the application in a separate library, it allows reuse and testability. Here is some sample code, I am not sure what language you are using, but I wrote it in C#.
First I create a generic class that will models my ConfigurationItem.
public class ConfigurationItem<T>
{
private T item;
public ConfigurationItem(T item)
{
this.item = item;
}
public T GetValue()
{
return item;
}
}
Then I create a class that exposes public static readonly variables for the configuration item. Here I am just reading the ConnectionStringSettings from a config file, which is just xml. Of course for more items, you can read the values from any source.
public class ConfigurationItems
{
public static ConfigurationItem<ConnectionStringSettings> ConnectionSettings = new ConfigurationItem<ConnectionStringSettings>(RetrieveConnectionString());
private static ConnectionStringSettings RetrieveConnectionString()
{
// In .Net, we store our connection string in the application/web config file.
// We can access those values through the ConfigurationManager class.
return ConfigurationManager.ConnectionStrings[ConfigurationManager.AppSettings["ConnectionKey"]];
}
}
Then when I need a ConfigurationItem for use, I call it like this:
ConfigurationItems.ConnectionSettings.GetValue();
And it will return me a type safe value, which I can then cache or do whatever I want with.
Here's a sample test:
[TestFixture]
public class ConfigurationItemsTest
{
[Test]
public void ShouldBeAbleToAccessConnectionStringSettings()
{
ConnectionStringSettings item = ConfigurationItems.ConnectionSettings.GetValue();
Assert.IsNotNull(item);
}
}
Hope this helps.
A:
Usually this is handled by an ini file or XML configuration file. Then you just have a class that reads the setting when needed.
.NET has this built in with the ConfigurationManager classes, but it's quite easy to implement yourself: just read text files, or load XML into a DOM, or parse them by hand in code.
Having config settings in the database is OK, but it does tie you to the database and creates an extra dependency for your app that ini/xml files avoid.
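To make the .NET case concrete, here is a minimal sketch of reading a typed value with ConfigurationManager (the "Setting1" key is made up):
using System.Configuration;  // add a reference to System.Configuration.dll
using System.Globalization;

public static class AppConfig
{
    // Reads <add key="Setting1" value="42.5" /> from the appSettings section
    // of app.config/web.config; "Setting1" is an illustrative key name.
    public static double Setting1
    {
        get
        {
            return double.Parse(ConfigurationManager.AppSettings["Setting1"],
                                CultureInfo.InvariantCulture);
        }
    }
}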
A:
I did this:
public class MySettings
{
public static double Setting1
{ get { return SettingsCache.Instance.GetDouble("Setting1"); } }
public static string Setting2
{ get { return SettingsCache.Instance.GetString("Setting2"); } }
}
I put this in a separate infrastructure module to remove any issues with circular dependencies.
Doing this, I am not tied to any specific configuration method, and have no magic strings running havoc in my application code.
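The SettingsCache referenced above isn't shown; here is a minimal sketch of what such a singleton might look like (the in-memory dictionary is an assumption; it could just as well be loaded from the database):
using System.Collections.Generic;
using System.Globalization;

public sealed class SettingsCache
{
    public static readonly SettingsCache Instance = new SettingsCache();

    // Illustrative in-memory store; in production this would be filled from
    // the database or a config file, and from fixtures in unit tests.
    private readonly Dictionary<string, string> values =
        new Dictionary<string, string>();

    private SettingsCache() { }

    public void Set(string key, string value)
    {
        values[key] = value;
    }

    public string GetString(string key)
    {
        return values[key];
    }

    public double GetDouble(string key)
    {
        return double.Parse(values[key], CultureInfo.InvariantCulture);
    }
}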
| Access to global application settings | A database application that I'm currently working on, stores all sorts of settings in the database. Most of those settings are there to customize certain business rules, but there's also some other stuff in there.
The app contains objects that specifically do a certain task, e.g., a certain complicated calculation. Those non-UI objects are unit-tested, but also need access to lots of those global settings. The way we've implemented this right now, is by giving the objects properties that are filled by the Application Controller at runtime. When testing, we create the objects in the test and fill in values for testing (not from the database).
This works well, and in any case much better than having all those objects depend on some global Settings object --- that would of course effectively make unit testing impossible :) A disadvantage can be that you sometimes need to set a dozen properties, or that you need to let those properties 'percolate' into sub-objects.
So the general question is: how do you provide access to global application settings in your projects, without the need for global variables, while still being able to unit test your code? This must be a problem that's been solved 100's of times...
(Note: I'm not too much of an experienced programmer, as you'll have noticed; but I love to learn! And of course, I've already done research into this topic, but I'm really looking for some first-hand experiences)
| [
"You could use Martin Fowlers ServiceLocator pattern. In php it could look like this:\nclass ServiceLocator {\n private static $soleInstance;\n private $globalSettings;\n\n public static function load($locator) {\n self::$soleInstance = $locator;\n }\n\n public static function globalSettings() {\n if (!isset(self::$soleInstance->globalSettings)) {\n self::$soleInstance->setGlobalSettings(new GlobalSettings());\n }\n return self::$soleInstance->globalSettings;\n }\n}\n\nYour production code then initializes the service locator like this:\nServiceLocator::load(new ServiceLocator());\n\nIn your test-code, you insert your mock-settings like this:\nServiceLocator s = new ServiceLocator();\ns->setGlobalSettings(new MockGlobalSettings());\nServiceLocator::load(s);\n\nIt's a repository for singletons that can be exchanged for testing purposes.\n",
"I like to model my configuration access off of the Service Locator pattern. This gives me a single point to get any configuration value that I need and by putting it outside the application in a separate library, it allows reuse and testability. Here is some sample code, I am not sure what language you are using, but I wrote it in C#.\nFirst I create a generic class that will models my ConfigurationItem.\npublic class ConfigurationItem<T>\n{\n private T item;\n\n public ConfigurationItem(T item)\n {\n this.item = item;\n }\n\n public T GetValue()\n {\n return item;\n }\n}\n\nThen I create a class that exposes public static readonly variables for the configuration item. Here I am just reading the ConnectionStringSettings from a config file, which is just xml. Of course for more items, you can read the values from any source.\npublic class ConfigurationItems\n{\n public static ConfigurationItem<ConnectionStringSettings> ConnectionSettings = new ConfigurationItem<ConnectionStringSettings>(RetrieveConnectionString());\n\n private static ConnectionStringSettings RetrieveConnectionString()\n {\n // In .Net, we store our connection string in the application/web config file.\n // We can access those values through the ConfigurationManager class.\n return ConfigurationManager.ConnectionStrings[ConfigurationManager.AppSettings[\"ConnectionKey\"]];\n }\n}\n\nThen when I need a ConfigurationItem for use, I call it like this:\nConfigurationItems.ConnectionSettings.GetValue();\n\nAnd it will return me a type safe value, which I can then cache or do whatever I want with.\nHere's a sample test:\n[TestFixture]\npublic class ConfigurationItemsTest\n{\n [Test]\n public void ShouldBeAbleToAccessConnectionStringSettings()\n {\n ConnectionStringSettings item = ConfigurationItems.ConnectionSettings.GetValue();\n Assert.IsNotNull(item);\n }\n}\n\nHope this helps.\n",
"Usually this is handled by an ini file or XML configuration file. Then you just have a class that reads the setting when neeed.\n.NET has this built in with the ConfigurationManager classes, but it's quite easy to implement, just read text files, or load XML into DOM or parse them by hand in code.\nHaving config files in the database is ok, but it does tie you to the database, and creates an extra dependancy for your app that ini/xml files solve.\n",
"I did this:\npublic class MySettings\n{\n public static double Setting1\n { get { return SettingsCache.Instance.GetDouble(\"Setting1\"); } }\n\n public static string Setting2\n { get { return SettingsCache.Instance.GetString(\"Setting2\"); } }\n}\n\nI put this in a separate infrastructure module to remove any issues with circular dependencies.\nDoing this I am not tied to any specific configuration method, and have no strings running havoc in my applications code.\n"
] | [
1,
1,
0,
0
] | [] | [] | [
"language_agnostic",
"oop"
] | stackoverflow_0000012140_language_agnostic_oop.txt |
Q:
Namespace/solution structure
I apologize for asking such a generalized question, but it's something that can prove challenging for me. My team is about to embark on a large project that will hopefully drag together all of the random one-off codebases that have evolved through the years. Given that this project will cover standardizing logical entities across the company ("Customer", "Employee"), small tasks, large tasks that control the small tasks, and utility services, I'm struggling to figure out the best way to structure the namespaces and code structure.
Though I guess I'm not giving you enough specifics to go on, do you have any resources or advice on how to approach splitting your domains up logically? In case it helps, most of this functionality will be revealed via web services, and we're a Microsoft shop with all the latest gizmos and gadgets.
I'm debating one massive solution with subprojects to make references easier, but will that make it too unwieldy?
Should I wrap up legacy application functionality, or leave that completely agnostic in the namespace (making an OurCRMProduct.Customer class versus a generic Customer class, for instance)?
Should each service/project have its own BAL and DAL, or should that be an entirely separate assembly that everything references?
I don't have experience with organizing such far-reaching projects, only one-offs, so I'm looking for any guidance I can get.
A:
There's a million ways to skin a cat. However, the simplest one is always the best. Which way is the simplest for you? Depends on your requirements. But there are some general rules of thumb I follow.
First, reduce the overall number of projects as much as possible. When you compile twenty times a day, that extra minute adds up.
If your app is designed for extensibility, consider splitting your assemblies along the lines of design vs. implementation. Place your interfaces and base classes in a public assembly. Create an assembly for your company's implementations of these classes.
For large applications, keep your UI logic and business logic separate.
SIMPLIFY your solution. If it looks too complex, it probably is. Combine, reduce.
A:
My advice, having embarked on a similar undertaking, is to not agonize over the namespaces.
Just start developing with a few important loose guidelines, because however you start out, your project is organic, and you will end up reorganizing the namespaces and classes over time.
Don't waste time talking too much about your project. Just do it.
A:
I recently experienced the exact same thing at work. Lots of ad-hoc code that needed to be structured and organised.
It's really hard at first, since there is so much. I think the best advice I could give is to just invest time in it during the wind-down on a Friday afternoon. For a couple of weeks I would just pick an app/chunk of code, examine what was there, think about what we could make generic, copy it, and put it into the new library wherever I thought it should be. Once I had all the code within an application migrated, I would then work on refactoring the application to work from the common framework. This sometimes caused problems that needed to be fixed, but so long as you're thorough it shouldn't be too big a deal.
Piece by piece, that's the only way to do it.
In terms of structure, I tried to kind of mimic the MS namespacing, since for the most part it's pretty logical (e.g. Company.Data, Company.Web, Company.Web.UI, and so on).
One of the major benefits is probably the amount of code duplication removed. Yeah, a little refactoring was required in the apps, but the code base is a lot leaner, and in many ways "smarter".
Another thing I noticed is that I would often have problems trying to figure out where to put stuff (in terms of namespacing), since I wasn't sure what it belonged to. Now this really concerned me; I viewed it as such a bad smell. Since the re-org, everything now falls into place much more nicely. And with the (now very small) amount of application-specific code, things get put into Company.Applications.ApplicationName. This helps me really think about business objects a lot more, since I don't want too much within this namespace, so I come up with more flexible designs.
Sorry for the long post... it's kind of rambling!
A:
We name the assemblies in .NET the following way: Company.Project.XXXX.YYYY, where XXXX is the project and YYYY is the subproject. For example:
LCP.AdmCom.Common
LCP.AdmCom.BusinessObjects
LCP.AdmCom.Common.Dal
We take this from the book Framework Design Guidelines by Krzysztof Cwalina and Brad Abrams.
A:
For large projects, the approach I like to take is to have one Domain namespace for my business objects, and then use Data Transfer Objects (DTOs) in my layers where storage and retrieval of the business object is needed. A DTO is a simple object that doesn't contain any business logic.
Here is a link that explains a DTO:
http://martinfowler.com/eaaCatalog/dataTransferObject.html
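As a concrete illustration, a hypothetical customer DTO is typically nothing more than a bag of properties:
// Hypothetical DTO for a Customer entity: no behavior, no business rules,
// just the fields a service boundary or persistence layer shuttles around.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}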
A:
Large solutions with lots of projects can be quite slow to compile, but are easier to manage together.
I often have Unit test assemblies in the same solution as the ones they're testing, as you tend to make changes to them together.
| Namespace/solution structure | I apologize for asking such a generalized question, but it's something that can prove challenging for me. My team is about to embark on a large project that will hopefully drag together all of the random one-off codebases that have evolved through the years. Given that this project will cover standardizing logical entities across the company ("Customer", "Employee"), small tasks, large tasks that control the small tasks, and utility services, I'm struggling to figure out the best way to structure the namespaces and code structure.
Though I guess I'm not giving you enough specifics to go on, do you have any resources or advice on how to approach splitting your domains up logically? In case it helps, most of this functionality will be revealed via web services, and we're a Microsoft shop with all the latest gizmos and gadgets.
I'm debating one massive solution with subprojects to make references easier, but will that make it too unwieldy?
Should I wrap up legacy application functionality, or leave that completely agnostic in the namespace (making an OurCRMProduct.Customer class versus a generic Customer class, for instance)?
Should each service/project have its own BAL and DAL, or should that be an entirely separate assembly that everything references?
I don't have experience with organizing such far-reaching projects, only one-offs, so I'm looking for any guidance I can get.
| [
"There's a million ways to skin a cat. However, the simplest one is always the best. Which way is the simplest for you? Depends on your requirements. But there are some general rules of thumb I follow.\nFirst, reduce the overall number of projects as much as possible. When you compile twenty times a day, that extra minute adds up. \nIf your app is designed for extensibility, consider splitting your assemblies along the lines of design vs. implementation. Place your interfaces and base classes in a public assembly. Create an assembly for your company's implementations of these classes. \nFor large applications, keep your UI logic and business logic separate. \nSIMPLIFY your solution. If it looks too complex, it probably is. Combine, reduce.\n",
"My advice, having embarked on a similar undertaking, is to not agonize over the name spaces..\nJust start developing with a few important loose guidelines, because however you start out, your project is organic, and you will end up reorganizing the name spaces and classes over time.\nDon't waste time talking too much about your project. Just do it.\n",
"I recently experienced the exact same at work. Lots of ad-hoc code that needed to be structured and organised.\nIts real hard at first, since there is so much. I think the best advice I could give is to just invest time in it on the wind down on a Friday afternoon, for a couple of weeks I would just pick an app/chunk of code, examine what was there, think about what we could make generic, copy it, put it into the new library wherever I thought it should be. Once I had all the code within an application migrated, I would then work on refactoring the application to work from the common framework.. This sometimes caused problems that needed to be fixed, but so long as your thorough it shouldn't be too big a deal.\nPiece by piece, thats the only way to do it.\nIn terms of structure, I tried to kind of mimic the MS namespacing since for the most part its pretty logical (e.g. Company.Data , Company.Web , Company.Web.UI and so on.\nOne of the major benefits is probably the amount of code dupe removed. Yeah a little refactoring was required in the apps, but the code base is a lot leaner, and in many ways \"smarter\".\nAnother thing I noticed is that I would often have problems trying to figure out where to put stuff (in terms of namespacing) since I wasn't sure what it belonged to. Now this really concerned me, I viewed it as such a bad smell. Since the re-org everything now falls in to space much more nicely. And with the (now very small amount) of application specfic code, they get put into Company.Applications.ApplicationName This helps me really think about business objects a lot more since I dont want too much within this namespace, so I come up with more flexible designs.\nSorry for the long post.. It's kind of rambling!\n",
"We name the assemblies in .NET the following way Company.Project.XXXX.YYYY where XXXX is Project and YYYYY is subproject, for example:\n\nLCP.AdmCom.Common\nLCP.AdmCom.BusinessObjects\nLCP.AdmCom.Common.Dal\n\nWe take this from a book call Framework Design Guidelines by Krzysztof Cwalina (Author), Brad Abrams (Author)\n",
"For large projects the approach I like to take is to have one Domain namespace for my business objects and then use Data Transfer Objects (DTO's) in my layers where storage and retrieval of the business object is needed. A DTO is a simple object that doesn't contain any business logic.\nHere is a link that explains a DTO:\nhttp://martinfowler.com/eaaCatalog/dataTransferObject.html\n",
"Large solutions with lots of projects can be quite slow to compile, but are easier to manage together.\nI often have Unit test assemblies in the same solution as the ones they're testing, as you tend to make changes to them together.\n"
] | [
12,
7,
4,
4,
3,
0
] | [] | [] | [
"architecture",
"legacy",
"module",
"namespaces"
] | stackoverflow_0000012243_architecture_legacy_module_namespaces.txt |
Q:
How do I restyle an Adobe Flex Accordion to include a button in each canvas header?
Here is the sample code for my accordion:
<mx:Accordion x="15" y="15" width="230" height="599" styleName="myAccordion">
<mx:Canvas id="pnlSpotlight" label="SPOTLIGHT" height="100%" width="100%" horizontalScrollPolicy="off">
<mx:VBox width="100%" height="80%" paddingTop="2" paddingBottom="1" verticalGap="1">
<mx:Repeater id="rptrSpotlight" dataProvider="{aSpotlight}">
<sm:SmallCourseListItem
viewClick="PlayFile(event.currentTarget.getRepeaterItem().fileID);"
Description="{rptrSpotlight.currentItem.fileDescription}"
FileID = "{rptrSpotlight.currentItem.fileID}"
detailsClick="{detailsView.SetFile(event.currentTarget.getRepeaterItem().fileID,this)}"
Title="{rptrSpotlight.currentItem.fileTitle}"
FileIcon="{iconLibrary.getIcon(rptrSpotlight.currentItem.fileExtension)}" />
</mx:Repeater>
</mx:VBox>
</mx:Canvas>
</mx:Accordion>
I would like to include a button in each header like so:
A:
Thanks, I got it working using FlexLib's CanvasButtonAccordionHeader.
A:
You will have to create a custom header renderer, add a button to it and position it manually. Try something like this:
<mx:Accordion>
<mx:headerRenderer>
<mx:Component>
<AccordionHeader xmlns="mx.containers.accordionClasses.*">
<mx:Script>
<![CDATA[
import mx.controls.Button;
private var extraButton : Button;
override protected function createChildren( ) : void {
super.createChildren();
if ( extraButton == null ) {
extraButton = new Button();
addChild(extraButton);
}
}
override protected function updateDisplayList( unscaledWidth : Number, unscaledHeight : Number ) : void {
super.updateDisplayList(unscaledWidth, unscaledHeight);
extraButton.setActualSize(unscaledHeight - 6, unscaledHeight - 6);
extraButton.move(unscaledWidth - extraButton.width - 3, (unscaledHeight - extraButton.height)/2);
}
]]>
</mx:Script>
</AccordionHeader>
</mx:Component>
</mx:headerRenderer>
    <mx:HBox label="1"><mx:Label text="Text 1"/></mx:HBox>
    <mx:HBox label="2"><mx:Label text="Text 2"/></mx:HBox>
    <mx:HBox label="3"><mx:Label text="Text 3"/></mx:HBox>
</mx:Accordion>
| How do I restyle an Adobe Flex Accordion to include a button in each canvas header? | Here is the sample code for my accordion:
<mx:Accordion x="15" y="15" width="230" height="599" styleName="myAccordion">
<mx:Canvas id="pnlSpotlight" label="SPOTLIGHT" height="100%" width="100%" horizontalScrollPolicy="off">
<mx:VBox width="100%" height="80%" paddingTop="2" paddingBottom="1" verticalGap="1">
<mx:Repeater id="rptrSpotlight" dataProvider="{aSpotlight}">
<sm:SmallCourseListItem
viewClick="PlayFile(event.currentTarget.getRepeaterItem().fileID);"
Description="{rptrSpotlight.currentItem.fileDescription}"
FileID = "{rptrSpotlight.currentItem.fileID}"
detailsClick="{detailsView.SetFile(event.currentTarget.getRepeaterItem().fileID,this)}"
Title="{rptrSpotlight.currentItem.fileTitle}"
FileIcon="{iconLibrary.getIcon(rptrSpotlight.currentItem.fileExtension)}" />
</mx:Repeater>
</mx:VBox>
</mx:Canvas>
</mx:Accordion>
I would like to include a button in each header like so:
| [
"Thanks, I got it working using FlexLib's CanvasButtonAccordionHeader.\n",
"You will have to create a custom header renderer, add a button to it and position it manually. Try something like this:\n<mx:Accordion>\n <mx:headerRenderer>\n <mx:Component>\n <AccordionHeader xmlns=\"mx.containers.accordionClasses.*\">\n <mx:Script>\n <![CDATA[\n\n import mx.controls.Button;\n\n\n private var extraButton : Button;\n\n\n override protected function createChildren( ) : void {\n super.createChildren();\n\n if ( extraButton == null ) {\n extraButton = new Button();\n\n addChild(extraButton);\n }\n }\n\n override protected function updateDisplayList( unscaledWidth : Number, unscaledHeight : Number ) : void {\n super.updateDisplayList(unscaledWidth, unscaledHeight);\n\n extraButton.setActualSize(unscaledHeight - 6, unscaledHeight - 6);\n extraButton.move(unscaledWidth - extraButton.width - 3, (unscaledHeight - extraButton.height)/2);\n }\n\n ]]>\n </mx:Script>\n </AccordionHeader>\n </mx:Component>\n </mx:headerRenderer>\n\n <mx:HBox label=\"1\"><Label text=\"Text 1\"/></HBox>\n <mx:HBox label=\"1\"><Label text=\"Text 2\"/></HBox>\n <mx:HBox label=\"1\"><Label text=\"Text 3\"/></HBox>\n</mx:Accordion>\n\n"
] | [
2,
1
] | [] | [] | [
"actionscript_3",
"apache_flex"
] | stackoverflow_0000011665_actionscript_3_apache_flex.txt |
Q:
How can I remove nodes from a SiteMapNodeCollection?
I've got a Repeater that lists all the web.sitemap child pages on an ASP.NET page. Its DataSource is a SiteMapNodeCollection. But, I don't want my registration form page to show up there.
Dim Children As SiteMapNodeCollection = SiteMap.CurrentNode.ChildNodes
'remove registration page from collection
For Each n As SiteMapNode In SiteMap.CurrentNode.ChildNodes
If n.Url = "/Registration.aspx" Then
Children.Remove(n)
End If
Next
RepeaterSubordinatePages.DataSource = Children
The SiteMapNodeCollection.Remove() method throws a
NotSupportedException: "Collection is read-only".
How can I remove the node from the collection before DataBinding the Repeater?
A:
Using Linq and .Net 3.5:
'this will now be an enumeration, rather than a read-only collection
Dim children = SiteMap.CurrentNode.ChildNodes.Cast(Of SiteMapNode)().Where( _
    Function(x) x.Url <> "/Registration.aspx" )
RepeaterSubordinatePages.DataSource = children
Without Linq, but using .Net 2:
Function IsShown( n As SiteMapNode ) As Boolean
    Return n.Url <> "/Registration.aspx"
End Function

...

'get a generic list (SiteMapNodeCollection is not generic, so copy the nodes across)
Dim children As New List(Of SiteMapNode)()
For Each n As SiteMapNode In SiteMap.CurrentNode.ChildNodes
    children.Add(n)
Next

'use the generic list's FindAll method
RepeaterSubordinatePages.DataSource = children.FindAll( AddressOf IsShown )
Avoid removing items from collections as that's always slow. Unless you're going to be looping through multiple times you're better off filtering.
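For reference, the same filter in C# might look like this (SiteMapNodeCollection only exposes the non-generic IList, hence the Cast):
using System.Linq;
using System.Web;

// Cast<SiteMapNode>() makes the LINQ extension methods available on the
// non-generic SiteMapNodeCollection.
var children = SiteMap.CurrentNode.ChildNodes
    .Cast<SiteMapNode>()
    .Where(n => n.Url != "/Registration.aspx");

RepeaterSubordinatePages.DataSource = children;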
A:
You shouldn't need CType
Dim children = _
From n In SiteMap.CurrentNode.ChildNodes.Cast(Of SiteMapNode)() _
Where n.Url <> "/Registration.aspx" _
Select n
A:
I got it to work with code below:
Dim children = From n In SiteMap.CurrentNode.ChildNodes _
Where CType(n, SiteMapNode).Url <> "/Registration.aspx" _
Select n
RepeaterSubordinatePages.DataSource = children
Is there a better way where I don't have to use the CType()?
Also, this sets children to a System.Collections.Generic.IEnumerable(Of Object). Is there a good way to get back something more strongly typed like a System.Collections.Generic.IEnumerable(Of System.Web.SiteMapNode) or even better a System.Web.SiteMapNodeCollection?
| How can I remove nodes from a SiteMapNodeCollection? | I've got a Repeater that lists all the web.sitemap child pages on an ASP.NET page. Its DataSource is a SiteMapNodeCollection. But, I don't want my registration form page to show up there.
Dim Children As SiteMapNodeCollection = SiteMap.CurrentNode.ChildNodes
'remove registration page from collection
For Each n As SiteMapNode In SiteMap.CurrentNode.ChildNodes
If n.Url = "/Registration.aspx" Then
Children.Remove(n)
End If
Next
RepeaterSubordinatePages.DataSource = Children
The SiteMapNodeCollection.Remove() method throws a
NotSupportedException: "Collection is read-only".
How can I remove the node from the collection before DataBinding the Repeater?
| [
"Using Linq and .Net 3.5:\n//this will now be an enumeration, rather than a read only collection\nDim children = SiteMap.CurrentNode.ChildNodes.Where( _\n Function (x) x.Url <> \"/Registration.aspx\" )\n\nRepeaterSubordinatePages.DataSource = children \n\nWithout Linq, but using .Net 2:\nFunction IsShown( n as SiteMapNode ) as Boolean\n Return n.Url <> \"/Registration.aspx\"\nEnd Function\n\n...\n\n//get a generic list\nDim children as List(Of SiteMapNode) = _\n New List(Of SiteMapNode) ( SiteMap.CurrentNode.ChildNodes )\n\n//use the generic list's FindAll method\nRepeaterSubordinatePages.DataSource = children.FindAll( IsShown )\n\nAvoid removing items from collections as that's always slow. Unless you're going to be looping through multiple times you're better off filtering.\n",
"Your shouldn't need CType\nDim children = _\n From n In SiteMap.CurrentNode.ChildNodes.Cast(Of SiteMapNode)() _\n Where n.Url <> \"/Registration.aspx\" _\n Select n\n\n",
"I got it to work with code below:\nDim children = From n In SiteMap.CurrentNode.ChildNodes _\n Where CType(n, SiteMapNode).Url <> \"/Registration.aspx\" _\n Select n\nRepeaterSubordinatePages.DataSource = children\n\nIs there a better way where I don't have to use the CType()?\nAlso, this sets children to a System.Collections.Generic.IEnumerable(Of Object). Is there a good way to get back something more strongly typed like a System.Collections.Generic.IEnumerable(Of System.Web.SiteMapNode) or even better a System.Web.SiteMapNodeCollection?\n"
] | [
1,
1,
0
] | [] | [] | [
".net",
"asp.net",
"repeater",
"sitemap",
"vb.net"
] | stackoverflow_0000012297_.net_asp.net_repeater_sitemap_vb.net.txt |
Q:
Reference that lists available JavaScript events?
I'm aware of things like onchange, onmousedown, and onmouseup, but is there a good reference somewhere that lists all of them, complete with possibly a list of the elements that they cover?
A:
W3Schools seems to have a good Javascript events reference: HTML DOM Events
A:
Quirksmode has a nice event-compatibility table and an introduction.
A:
Here is a pretty good JavaScript event reference with the elements they are for:
JavaScript Tutorial >> JavaScript Events
A:
This JavaScript Cheat Sheet has a complete list of event handlers. Nearly all of them can be used on any HTML element, except for one or two.
If you want a lightweight JavaScript library, DOMAssistant allows you to add events to elements very easily. Like so:
$("#navigation a").addEvent("click", myFunc);
A:
If you're going to be working with events (setting custom functions and event handlers), then I'd recommend checking out the jQuery library. It makes event binding so much easier than doing it by hand.
| Reference that lists available JavaScript events? | I'm aware of things like onchange, onmousedown, and onmouseup, but is there a good reference somewhere that lists all of them, complete with possibly a list of the elements that they cover?
| [
"W3Schools seems to have a good Javascript events reference: HTML DOM Events\n",
"Quirksmode has a nice event-compatibility table and an introduction.\n",
"Here is a pretty good JavaScript event reference with the elements they are for:\nJavaScript Tutorial >> JavaScript Events\n",
"This Javascript Cheat Sheet has a complete list of of event handlers. Nearly all of them can be used on any html element except for one or two.\nIf you want to use a lightweight javascript library, DOMAssistant is very lightweight and allows you to add events to elements very easily. Like so:\n$(\"#navigation a\").addEvent(\"click\", myFunc);\n\n",
"If you're going to be working with events (setting custom functions and event handlers), then I'd recommend checking out the jQuery library. It makes event binding so much easier than doing it by hand.\n"
] | [
11,
11,
4,
4,
2
] | [] | [] | [
"browser",
"client_side",
"javascript"
] | stackoverflow_0000006661_browser_client_side_javascript.txt |
Q:
Automate adding entries to a wiki
Once I have my renamed files I need to add them to my project's wiki page. This is a fairly repetitive manual task, so I guess I could script it but I don't know where to start.
The process is:
Go to the appropriate page on the wiki
for each team member (DeveloperA, DeveloperB, DeveloperC)
{
for each of two files ('*_current.jpg', '*_lastweek.jpg')
{
Select 'Attach' link on page
Select the 'manage' link next to the file to be updated
Click 'Browse' button
Browse to the relevant file (which has the same name as the previous version)
Click 'Upload file' button
}
}
Not necessarily looking for the full solution as I'd like to give it a go myself.
Where to begin? What language could I use to do this and how difficult would it be?
A:
Check if the wiki you mean to talk to supports XMLRPC, because if it does it should be a snap. I wrote a tool called WikiUp to solve a similar problem (updating a delineated section on a wiki page).
A:
If you're writing in C#, the WebClient classes might be a good place to start. I bet people could give more specific advice if you mentioned which wiki platform you are using, and whether it requires authentication, though.
I'd probably start by downloading Fiddler and watching the HTTP requests from doing it manually. Then you could use some simple scripts and regexes to build your HTTP requests for automating the process.
Of course, if you're wildly lucky, your wiki will have a backend simple enough that you could just plug into its DB directly. :)
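To sketch the WebClient idea (the URL, page name, and auth here are placeholders you'd confirm by watching a manual upload in Fiddler):
using System.Net;

class WikiUploader
{
    static void Main()
    {
        // Placeholder endpoint: capture a real manual upload in Fiddler to
        // find the actual attachment URL, form fields, and auth mechanism.
        const string uploadUrl = "http://wiki.example.com/attach?page=TeamStatus";

        using (var client = new WebClient())
        {
            client.Credentials = CredentialCache.DefaultCredentials;
            foreach (string dev in new[] { "DeveloperA", "DeveloperB", "DeveloperC" })
            {
                // UploadFile issues a multipart/form-data POST per image.
                client.UploadFile(uploadUrl, "POST", dev + "_current.jpg");
                client.UploadFile(uploadUrl, "POST", dev + "_lastweek.jpg");
            }
        }
    }
}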
A:
You might find CoScripter useful -- it's a Firefox extension that allows you to automate tasks you perform on websites. I'm not certain how you'd integrate this with the list of files you're changing on your local system, but it can certainly handle the file uploading through a web form.
A better bet is probably using cURL or a similar HTTP library with your programming language of choice. If you're on *nix, you can use the cURL command-line program inside your shell script to get this done fairly easily. (Like @jsight said, you will need to analyze the actual forms you're using on the webpage, using Fiddler or just looking at the form elements, and re-create the POST through cURL.)
| Automate adding entries to a wiki | Once I have my renamed files I need to add them to my project's wiki page. This is a fairly repetitive manual task, so I guess I could script it but I don't know where to start.
The process is:
Go to the appropriate page on the wiki
for each team member (DeveloperA, DeveloperB, DeveloperC)
{
for each of two files ('*_current.jpg', '*_lastweek.jpg')
{
Select 'Attach' link on page
Select the 'manage' link next to the file to be updated
Click 'Browse' button
Browse to the relevant file (which has the same name as the previous version)
Click 'Upload file' button
}
}
Not necessarily looking for the full solution as I'd like to give it a go myself.
Where to begin? What language could I use to do this and how difficult would it be?
| [
"Check if the wiki you mean to talk to supports XMLRPC, because if it does it should be a snap. I wrote a tool called WikiUp to solve a similar problem (updating a delineated section on a wiki page). \n",
"If you're writing in C#, the WebClient classes might be a good place to start. I bet people could give more specific advice if you mentioned which wiki platform you are using, and whether it requires authentication, though.\nI'd probably start by downloading fiddler and watching the http requests from doing it manually. Then you could use some simple scripts and regexes to build your http requests for automating the process.\nOf course, if your wildly lucky, your wiki would have a backend simple enough that you could just plug them into its db directly. :)\n",
"You might find CoScripter useful -- it's a Firefox extension that allows you to automate tasks you perform on websites. I'm not certain how you'd integrate this with the list of files you're changing on your local system, but it can certainly handle the file uploading through a web form.\nBetter bet is probably using cURL or a similar HTTP library with your programming language of choice. If you're on *nix, you can use the cURL commandline program inside your shell script to get this done fairly easily. (Like @jsight said you will need to analyze the actual forms you're using on the webpage, using Fiddler or just looking at the form elements and re-creating the POST through cURL.)\n"
] | [
2,
1,
1
] | [] | [] | [
"automation",
"scripting",
"webautomation"
] | stackoverflow_0000012428_automation_scripting_webautomation.txt |
Q:
Creating Visual Studio templates under the "Windows" category.
I have created a template for Visual Studio 2008 and it currently shows up under File->New Project->Visual C#. However, it is really specific to Visual C#/Windows, and I can't work out how to get it to show up under the "Windows" category rather than the more general "Visual C#".
A:
Check out MSDN "How to: Locate and Organize Project and Item Templates"
Create a folder within one of these
<VisualStudioInstallDir>\Common7\IDE\ItemTemplates\CSharp\
My Documents\Visual Studio 2008\Templates\ProjectTemplates\CSharp\
A:
Categorization of templates depends on settings (for example, if you choose "C#" settings, all of a sudden all other languages move to an "other languages" tree).
What folder is your template in?
| Creating Visual Studio templates under the "Windows" category. | I have created a template for Visual Studio 2008 and it currently shows up under File->New Project->Visual C#. However, it is really specific to Visual C#/Windows, and I can't work out how to get it to show up under the "Windows" category rather than the more general "Visual C#".
| [
"Check out MSDN \"How to: Locate and Organize Project and Item Templates\"\nCreate a folder within one of these\n<VisualStudioInstallDir>\\Common7\\IDE\\ItemTemplates\\CSharp\\\nMy Documents\\Visual Studio 2008\\Templates\\ProjectTemplates\\CSharp\\\n\n",
"Categorization of templates depends on settings (for example, if you choose \"C#\" settings, all of a sudden all other languages move to an \"other languages\" tree).\nWhat folder is your template in?\n"
] | [
5,
0
] | [] | [] | [
"templates",
"visual_studio"
] | stackoverflow_0000012271_templates_visual_studio.txt |
Q:
Alternative Hostname for an IIS web site for internal access only
I'm using IIS in Windows 2003 Server for a SharePoint intranet. External incoming requests will be using the host header portal.mycompany.com and be forced to use SSL.
I was wondering if there's a way to set up an alternate host header such as http://internalportal/ which only accepts requests from the internal network, but doesn't force the users to use SSL.
Any recommendations for how to set this up?
A:
Daniel, keep in mind that just because something is possible in IIS, and via any number of off-box solutions (like hardware load balancers and SSL), doesn't mean that it is supported by SharePoint, or that it is implemented in the same way.
You can do what you are asking for; however, you should do it via SharePoint Central Administration: "Create or Extend a Web Application" and then "Extend an Existing Application".
In this way you can create a new web site (in IIS) for accessing your existing SharePoint Web Application, one that can be accessed via a different host header, port, SSL setting, authentication mechanism, etc.
As a general rule, if you can do something in IIS AND in SharePoint, you should do it only in SharePoint.
A:
Assuming that http://internalportal/ wasn't accessible from outside the company, you could set up two websites in IIS. The first site, configured to use a host header value of 'portal.mycompany.com', would require SSL. The second site, configured to use a host header value of 'internalportal', would not require SSL. The host header value is configured under 'Web Site' -> 'Advanced'.
Having a hardware load balancer makes things much easier. The site on the load balancer is set up to require SSL, and your websites in IIS are setup not to require SSL.
A:
You could just add a second host header and internal IP address to the site for internal non-ssl access
172.16.3.1:443:portal.mycompany.com
172.16.3.2:80:internalportal
| Alternative Hostname for an IIS web site for internal access only | I'm using IIS in Windows 2003 Server for a SharePoint intranet. External incoming requests will be using the host header portal.mycompany.com and be forced to use SSL.
I was wondering if there's a way to set up an alternate host header such as http://internalportal/ which only accepts requests from the internal network, but doesn't force the users to use SSL.
Any recommendations for how to set this up?
| [
"Daniel, keep in mind that just because something is possbile in IIS, and via any number of off box solutions (like hardware load balancers and SSL) doesn't mean that it is supported by SharePoint, or that it is implemented in the same way.\nYou can do what you are asking for, however you should do it via SharePoint Central Administration, and \"Create or Extend a Web Application\" and then \"Extend and Existing Application\". \nIn this way you can create a new web site (in IIS) for accessing your existing SharePoint Web Application, one that can be accessed via a different hostheader, port, using SSL, Authentication mechanism, etc. \nAs a general rule, if you can do something in IIS AND in SharePoint, you should do it only in SharePoint.\n",
"Assuming that http://internalportal/ wasn't accessible from outside the company, you could set up two websites in IIS. The first site, configured to use a host header value of 'portal.mycompany.com', would require SSL. The second site, configured to use a host header value of 'internalportal', would not require SSL. The host header value is configured under 'Web Site' -> 'Advanced'.\nHaving a hardware load balancer makes things much easier. The site on the load balancer is set up to require SSL, and your websites in IIS are setup not to require SSL.\n",
"You could just add a second host header and internal IP address to the site for internal non-ssl access\n172.16.3.1:443:portal.mycompany.com\n172.16.3.2:80:internalportal\n\n"
] | [
3,
1,
1
] | [] | [] | [
"iis",
"moss",
"sharepoint",
"wss"
] | stackoverflow_0000005494_iis_moss_sharepoint_wss.txt |
Q:
How do you convert the number you get from datepart to the name of the day?
Is there a quick one-liner to call datepart in Sql Server and get back the name of the day instead of just the number?
select datepart(dw, getdate());
This will return 1-7, with Sunday being 1. I would like 'Sunday' instead of 1.
A:
select datename(weekday, getdate());
A:
It actually took me more searching than I thought it would to find this answer. It's funny how you can use a technology for ages and never know about simple functions like this.
select datename(dw, getdate())
I'm not sure how localization would work with this function. Getting the name client-side is probably the answer, but it would be nice to do it in the database. Would SQL Server use the collation setting to determine the output for this?
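For the client-side route, .NET will hand you the localized day name directly, which sidesteps the collation question; a quick sketch:
using System;
using System.Globalization;

class DayNameDemo
{
    static void Main()
    {
        DateTime now = DateTime.Now;

        // "dddd" is the full day-name format specifier.
        Console.WriteLine(now.ToString("dddd"));  // current culture, e.g. "Sunday"
        Console.WriteLine(now.ToString("dddd", new CultureInfo("fr-FR")));  // e.g. "dimanche"
    }
}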
A:
This is not possible without using the result to select the day yourself. For one thing the textual representation of the day is locale-dependent. For another the returned value depends upon the 'datefirst' setting.
A:
If you want a localizable solution, just join the result against a table with the names and numbers.
| How do you convert the number you get from datepart to the name of the day? | Is there a quick one-liner to call datepart in Sql Server and get back the name of the day instead of just the number?
select datepart(dw, getdate());
This will return 1-7, with Sunday being 1. I would like 'Sunday' instead of 1.
| [
"select datename(weekday, getdate());\n\n",
"It actually took me more searching than I thought it would to find this answer. It's funny how you can use a technology for ages and never know about simple functions like this.\nselect datename(dw, getdate())\n\nI'm not sure how localization would work with this function. Getting the name client-side is probably the answer, but it would be nice to do it on the database. Would Sql Server use the collation setting to determine the output for this?\n",
"This is not possible without using the result to select the day yourself. For one thing the textual representation of the day is locale-dependent. For another the returned value depends upon the 'datefirst' setting.\n",
"If you want a localizable solution, just join the result against a table with the names and numbers.\n"
] | [
12,
1,
0,
0
] | [] | [] | [
"datepart",
"sql_server"
] | stackoverflow_0000012533_datepart_sql_server.txt |
Q:
Why do Sql Server 2005 maintenance plans use the wrong database for dbcc checkdb?
This is a problem I have seen other people besides myself having, and I haven't found a good explanation.
Let's say you have a maintenance plan with a task to check the database, something like this:
USE [MyDb]
GO
DBCC CHECKDB with no_infomsgs, all_errormsgs
If you go look in your logs after the task executes, you might see something like this:
08/15/2008 06:00:22,spid55,Unknown,DBCC CHECKDB (mssqlsystemresource) executed by NT AUTHORITY\SYSTEM found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.
08/15/2008 06:00:21,spid55,Unknown,DBCC CHECKDB (master) executed by NT AUTHORITY\SYSTEM found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.
Instead of checking MyDb, it checked master and mssqlsystemresource.
Why?
My workaround is to create a Sql Server Agent Job with this:
dbcc checkdb ('MyDb') with no_infomsgs, all_errormsgs;
That always works fine.
08/15/2008 04:26:04,spid54,Unknown,DBCC CHECKDB (MyDb) WITH all_errormsgs<c/> no_infomsgs executed by NT AUTHORITY\SYSTEM found 0 errors and repaired 0 errors. Elapsed time: 0 hours 26 minutes 3 seconds.
A:
For starters, always remember that GO is not a SQL keyword; it is merely a batch separator that is (generally) implemented/recognized by the client, not the server. So, depending on context and client, there really is no guarantee that the current database is preserved between batches.
A:
If you are using a maintenance plan you'd probably be better off using the check database integrity task. If you really want to run your own maintenance written in T-SQL, then run it using a step in a job, not in a maintenance plan, and the code above will work OK. Like Stu said, the GO statement is a client directive, not a SQL keyword, and only seems to be respected by isql, wsql, osql, etc. clients and the SQL Agent. I think it works in DTS packages. Obviously, not in DTSX, though.
A:
You have a check database integrity task, you double-clicked it and chose MyDb, and when the plan runs it only checks master?? Weird. Are you sure you don't have another plan running?
| Why do Sql Server 2005 maintenance plans use the wrong database for dbcc checkdb? | This is a problem I have seen other people besides myself having, and I haven't found a good explanation.
Let's say you have a maintenance plan with a task to check the database, something like this:
USE [MyDb]
GO
DBCC CHECKDB with no_infomsgs, all_errormsgs
If you go look in your logs after the task executes, you might see something like this:
08/15/2008 06:00:22,spid55,Unknown,DBCC CHECKDB (mssqlsystemresource) executed by NT AUTHORITY\SYSTEM found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.
08/15/2008 06:00:21,spid55,Unknown,DBCC CHECKDB (master) executed by NT AUTHORITY\SYSTEM found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds.
Instead of checking MyDb, it checked master and msssqlsystemresource.
Why?
My workaround is to create a Sql Server Agent Job with this:
dbcc checkdb ('MyDb') with no_infomsgs, all_errormsgs;
That always works fine.
08/15/2008 04:26:04,spid54,Unknown,DBCC CHECKDB (MyDb) WITH all_errormsgs<c/> no_infomsgs executed by NT AUTHORITY\SYSTEM found 0 errors and repaired 0 errors. Elapsed time: 0 hours 26 minutes 3 seconds.
| [
"For starters, always remember that GO is not a SQL keyword; it is merely a batch separator that is (generally) implemented/recognized by the client, not the server. So, depending on context and client, there really is no guarantee that the current database is preserved between batches.\n",
"If you are using a maintenance plan you'd probably be better off use the check database integrity task. If you really want to run you own maintenance written in t-sql then run it using a step in a job, not in a maintenance plan and the code above will work ok. Like Stu said the GO statement is client directive not a sql keyword and only seems to be respected by isql, wsql, osql, etc, clients and the sql agent. I think it works in DTS packages. Obviously, not in DTSX, though. \n",
"You have a check datasbase integrity task and you double-clicked it choose MyDb and when the plan runs it only checks master?? weird. Are you sure you don't another plan running? \n"
] | [
1,
1,
0
] | [] | [] | [
"sql_server",
"sql_server_2005"
] | stackoverflow_0000012304_sql_server_sql_server_2005.txt |
Q:
Tools for automating mouse and keyboard events sent to a windows application
What tools are useful for automating clicking through a windows form application? Is this even useful? I see the testers at my company doing this a great deal and it seems like a waste of time.
A:
Check out https://github.com/TestStack/White and http://nunitforms.sourceforge.net/. We've used the White project with success.
A:
Though they're mostly targeted at automating administration tasks or shortcuts for users, Autohotkey and AutoIT let you automate nearly anything you want as far as mouse/keyboard interaction.
Some of the mouse stuff can get tricky when the only way to really tell it what you want to click is an X,Y coordinate, but for automating entirely arbitrary tasks on a Windows machine, it does the trick.
Like I said, they're not necessarily intended for testing purposes, so they're not instrumented for unit test conventions. However, I use them all of the time to automate stuff that isn't testing related.
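For a taste of what these scripts look like, here's a minimal AutoHotkey (v1) sketch; the window title and coordinates are made up for illustration:

; Launch Notepad, wait for its window, type a line, then click a coordinate.
Run, notepad.exe
WinWaitActive, Untitled - Notepad
Send, Hello from AutoHotkey{Enter}
MouseClick, left, 100, 200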
A:
You can do it programmatically via the Microsoft UI Automation API. There's an MSDN Magazine article about it.
Integrates well with unit test frameworks. A better option than the coordinate-based script runners because you don't have to rewrite scripts when layouts change.
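A rough sketch of driving it (the window title and AutomationId are placeholders; the types live in the UIAutomationClient/UIAutomationTypes assemblies):

using System.Windows.Automation;

static void ClickOkButton()
{
    // Find a top-level window by title, then a button inside it by AutomationId.
    AutomationElement window = AutomationElement.RootElement.FindFirst(
        TreeScope.Children,
        new PropertyCondition(AutomationElement.NameProperty, "My App"));

    AutomationElement button = window.FindFirst(
        TreeScope.Descendants,
        new PropertyCondition(AutomationElement.AutomationIdProperty, "okButton"));

    // "Click" via the Invoke pattern rather than synthesizing mouse input.
    InvokePattern invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
    invoke.Invoke();
}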
A:
There's a couple out there. They all hook into the windows API to log item clicks, and then reproduce them to test.
We're now mostly web based (using WatiN), but we used to use Mercury Quicktest.
Don't use Quicktest, it's awful for a tremendously long list of reasons.
A:
This is what I was looking for.
Check out http://www.codeplex.com/white and http://nunitforms.sourceforge.net/. We've used the White project with success.
| Tools for automating mouse and keyboard events sent to a windows application | What tools are useful for automating clicking through a windows form application? Is this even useful? I see the testers at my company doing this a great deal and it seems like a waste of time.
| [
"Check out https://github.com/TestStack/White and http://nunitforms.sourceforge.net/. We've used the White project with success.\n",
"Though they're mostly targeted at automating administration tasks or shortcuts for users, Autohotkey and AutoIT let you automate nearly anything you want as far as mouse/keyboard interaction. \nSome of the mouse stuff can get tricky when the only way to really tell it what you want to click is an X,Y coordinate, but for automating entirely arbitrary tasks on a Windows machine, it does the trick.\nLike I said, they're not necessarily intended for testing purposes, so they're not instrumented for unit test conventions. However, I use them all of the time to automate stuff that isn't testing related.\n",
"You can do it programmatically via the Microsoft UI Automation API. There's an MSDN Magazine article about it.\nIntegrates well with unit test frameworks. A better option than the coordinate-based script runners because you don't have to rewrite scripts when layouts change.\n",
"There's a couple out there. They all hook into the windows API to log item clicks, and then reproduce them to test.\nWe're now mostly web based (using WatiN), but we used to use Mercury Quicktest. \nDon't use Quicktest, it's awful for a tremendously long list of reasons.\n",
"This is what i was looking for.\nCheck out http://www.codeplex.com/white and http://nunitforms.sourceforge.net/. We've used the White project with success.\n"
] | [
7,
7,
1,
0,
0
] | [] | [] | [
"automation",
"testing"
] | stackoverflow_0000012523_automation_testing.txt |
Q:
Access Control Lists & Access Control Objects, good tutorial?
we're developing a web app to cover all aspects of a printing company, from finances, to payroll, to job costing. It's important to be able to control who can access what parts of these applications. Don't want a line employee giving himself a raise, etc...
I've heard of the concept of ACL & ACO, but haven't found a good example that we could adapt to our project.
Anyone know where I can find good information to work from?
A:
A brief rundown on ACLs, where they should be used and how they should be structured and implemented for various applications and user levels can be found here:
LINK
A:
I've had to implement that type of security a couple of times. Unfortunately I don't know of any really good articles that provide examples. My implementations were mainly piecing together the parts through trial and error.
However, I did come across this link on MSDN:
http://msdn.microsoft.com/en-us/library/52kd59t0(VS.71).aspx
It has some of the concepts.
After my original post, I did some more research. I found this article:
http://www.aspfree.com/c/a/C-Sharp/Implementing-Role-Based-Security-using-CSharp/
it seems pretty promising, I didn't go through all the details, but it at least guides you through the high-level topics.
A:
If you're using .NET/Windows you might want to look into Windows Authorization Manager (AzMan). There is support for AzMan in Enterprise Library, but there are other ways of using it as well.
http://msdn.microsoft.com/en-us/library/ms998336.aspx
http://alt.pluralsight.com/wiki/default.aspx/Keith.GuideBook/WhatIsAuthorizationManager.html
| Access Control Lists & Access Control Objects, good tutorial? | we're developing a web app to cover all aspects of a printing company from finances, to payroll, to job costing. Its important to be able to control who can access what parts of these applications. Don't want a line employee giving himself a raise, etc...
I've heard of the concept of ACL & ACO, but haven't found a good example that we could adapt to our project.
Anyone know where I can find good information to work from?
| [
"A brief rundown on ACLs, where they should be used and how they should be structured and implemented for various applications and user levels can be found here:\nLINK\n",
"I've had to implement that type of security a couple of times. Unfortunately I don't know of any really good articles that provide examples. My implementations were mainly piecing together the parts through trial and error.\nHowever, I did come across this link on MSDN:\nhttp://msdn.microsoft.com/en-us/library/52kd59t0(VS.71).aspx\nIt has some of the concepts.\n\nAfter my original post, I did some more research. I found this article:\nhttp://www.aspfree.com/c/a/C-Sharp/Implementing-Role-Based-Security-using-CSharp/\nit seems pretty promising, I didn't go through all the details, but it at least guides you through the high-level topics.\n",
"If you're using .NET/Windows you might want to look into Windows Authorization Manager (AzMan). There are support for AzMan in Enterprise Library but there are other ways of using it as well.\nhttp://msdn.microsoft.com/en-us/library/ms998336.aspx\nhttp://alt.pluralsight.com/wiki/default.aspx/Keith.GuideBook/WhatIsAuthorizationManager.html\n"
] | [
2,
1,
1
] | [] | [] | [
"acl",
"permissions"
] | stackoverflow_0000012612_acl_permissions.txt |
Q:
Add .NET 2.0 SP1 as a prerequisite for deployment project
I have a .NET 2.0 application that has recently had contributions that are Service Pack 1 dependent. The deployment project has detected .NET 2.0 as a prerequisite, but NOT SP1. How do I include SP1 as a dependency/prerequisite in my deployment project?
A:
You'll want to set up a launch condition in your deployment project to make sure version 2.0 SP1 is installed. Set a requirement based off the MsiNetAssemblySupport variable, tied to the version number of .NET 2.0 SP1 (2.0.50727.1433 and above, according to this page).
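For illustration, the launch-condition expression ends up looking something like this (the exact version string is an assumption taken from the page referenced above):

MsiNetAssemblySupport >= "2.0.50727.1433"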
Bootstrapping the project to actually download the framework if it isn't installed is a different matter, and there are plenty of articles out there on how to do that.
| Add .NET 2.0 SP1 as a prerequisite for deployment project | I have a .NET 2.0 application that has recently had contributions that are Service Pack 1 dependent. The deployment project has detected .NET 2.0 as a prerequisite, but NOT SP1. How do I include SP1 as a dependency/prerequisite in my deployment project?
| [
"You'll want to setup launch condition in your deployment project to make sure version 2.0 SP1 is installed. You'll want to set a requirement based off the MsiNetAssemblySupport variable, tied to the version number of .NET 2.0 SP1 (2.0.50727.1433 and above according to this page.)\nBootstrapping the project to actually download the framework if it isn't installed is a different matter, and there are plenty of articles out there on how to do that.\n"
] | [
3
] | [] | [] | [
".net_2.0",
"dependencies",
"installation"
] | stackoverflow_0000012720_.net_2.0_dependencies_installation.txt |
Q:
Setting up Subversion on Windows as a service
When installing subversion as a service, I used this command:
c:\>svnservice -install --daemon --root "c:\documents and settings\my_repository"
And then I got this error:
Could not create service in service control manager.
After looking at some MSDN docs on the service control manager, I tried granting full control to everyone in the permissions on the registry key at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services, but that hasn't had any effect.
Anybody know what I did wrong, or how to overcome this?
Note #1: I am running as an administrator on this box
*Note #2: I was following the instructions given here, so maybe my choice of directory is misguided. And my repository is not actually called "my_repository". I used the name of an actual project which is currently under source control in gasp VSS.*
A:
VisualSVN Server installs as a Windows service. It is free, includes Apache, OpenSSL, and a repository / permission management tool. It can also integrate with Active Directory for user authentication. I highly recommend it for hosting SVN on Windows.
A:
I've followed the instructions given at the Collabnet site:
http://svn.apache.org/repos/asf/subversion/trunk/notes/windows-service.txt
They use the windows SC to create the service (which runs svnserve). This has worked for me without any problems (using svn 1.4 and 1.5)
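The core of those instructions boils down to a single sc command, roughly like this (the install path and repository root are examples; note that sc requires the space after each option=):

sc create svnserve binpath= "\"C:\Program Files\Subversion\bin\svnserve.exe\" --service -r C:\repos" displayname= "Subversion Server" depend= Tcpip start= auto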
A:
I think svnservice is obsolete, because since 1.4, svnserve itself has been able to run as a Windows service. (svnserve comes as a part of the normal SVN binary distribution)
http://svn.apache.org/repos/asf/subversion/trunk/notes/windows-service.txt contains the details of how to set it up.
And the binaries you want are here: http://subversion.tigris.org/servlets/ProjectDocumentList?folderID=91
But as others have said, there are now more friendly packages containing the svn stuff - VisualSVN Server (so badly named it makes me weep) and the Collabnet distribution - the later is Apache only, and is hand rolled on the thighs of virgins, which means that it always seems to appear about three weeks later than everyone else.
A:
I've never used the command line installer for this. I assume you are downloading the latest from:
http://svnservice.tigris.org/
I run the installer, and then use the configuration tool (in the Start Menu, SVN Service, SVN Service Administration) to set it up.
A:
The only thing I can currently think of, is the following: make sure you're running under an administrator account. That's absolutely necessary to install a service, AFAIK.
Have fun with Subversion, btw :)
A:
I'd suggest you move your repository to somewhere a little safer, maybe "c:\SVNRepo".
I'd hesitate to put the Repository in "Documents and Settings". Is your repository actually called "my_repository"?
A:
I recommend you use VisualSVN Server. Very easy to install
| Setting up Subversion on Windows as a service | When installing subversion as a service, I used this command:
c:\>svnservice -install --daemon --root "c:\documents and settings\my_repository"
And then I got this error:
Could not create service in service control manager.
After looking at some MSDN docs on the service control manager, I tried granting full control to everyone in the permissions on the registry key at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services, but that hasn't had any effect.
Anybody know what I did wrong, or how to overcome this?
Note #1: I am running as an administrator on this box
*Note #2: I was following the instructions given here, so maybe my choice of directory is misguided. And my repository is not actually called "my_repository". I used the name of an actual project which is currently under source control in gasp VSS.*
| [
"VisualSVN Server installs as a Windows service. It is free, includes Apache, OpenSSL, and a repository / permission management tool. It can also integrate with Active Directory for user authentication. I highly recommend it for hosting SVN on Windows.\n",
"I've followed the instructions given at the Collabnet site:\nhttp://svn.apache.org/repos/asf/subversion/trunk/notes/windows-service.txt\nThey use the windows SC to create the service (which runs svnserve). This has worked for me without any problems (using svn 1.4 and 1.5)\n",
"I think svnservice is obsolete, because since 1.4, svnserve itself has been able to run as a Windows service. (svnserve comes as a part of the normal SVN binary distribution)\nhttp://svn.apache.org/repos/asf/subversion/trunk/notes/windows-service.txt contains the details of how to set it up.\nAnd the binaries you want are here: http://subversion.tigris.org/servlets/ProjectDocumentList?folderID=91\nBut as others have said, there are now more friendly packages containing the svn stuff - VisualSVN Server (so badly named it makes me weep) and the Collabnet distribution - the later is Apache only, and is hand rolled on the thighs of virgins, which means that it always seems to appear about three weeks later than everyone else.\n",
"I've never used the command line installer for this. I assume you are downloading the latest from:\nhttp://svnservice.tigris.org/\nI run the installer, and then use the configuration tool (in the Start Menu, SVN Service, SVN Service Administration) to set it up.\n",
"The only thing I can currently think of, is the following: make sure you're running under an administrator account. That's absolutely necessary to install a service, AFAIK.\nHave fun with Subversion, btw :)\n",
"I'd suggest you move your repository to somewhere a little safer, maybe \"c:\\SVNRepo\". \nI'd hesitate to put the Repository in \"Documents and Settings\". Is your repository actually called \"my_repository\"?\n",
"I recommend you tu use Visual SVN Server. Very easy to install\n"
] | [
6,
1,
1,
0,
0,
0,
0
] | [] | [] | [
"svn",
"version_control"
] | stackoverflow_0000012718_svn_version_control.txt |
Q:
Unit testing a timer based application?
I am currently writing a simple, timer based mini app in C# that performs an action n times every k seconds.
I am trying to adopt a test driven development style, so my goal is to unit test all parts of the app.
So, my question is: Is there a good way to unit test a timer based class?
The problem, as I see it, is that there is a big risk that the tests will take uncomfortably long to execute, since they must wait out the timer intervals before the desired actions happen.
Especially if one wants realistic data (seconds), instead of using the minimal time resolution allowed by the framework (1 ms?).
I am using a mock object for the action, to register the number of times the action was called, and so that the action takes practically no time.
A:
What I have done is to mock the timer, and also the current system time, that my events could be triggered immediately, but as far as the code under test was concerned time elapsed was seconds.
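As a rough sketch of that approach (the interface and class names here are my own, not from any framework): production code takes an ITimer, and the test's fake fires Tick on demand so no real time has to pass.

using System;

public interface ITimer
{
    event EventHandler Tick;
    void Start();
    void Stop();
}

// Test double: lets a unit test "elapse" an interval instantly.
public class FakeTimer : ITimer
{
    public event EventHandler Tick;
    public void Start() { }
    public void Stop() { }

    public void FireTick()
    {
        if (Tick != null)
            Tick(this, EventArgs.Empty);
    }
}

A test can then call FireTick() n times and assert the action ran n times, with the whole run finishing in milliseconds.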
A:
Len Holgate has a series of 20 articles on testing timer based code.
A:
I think what I would do in this case is test the code that actually executes when the timer ticks, rather than the entire sequence. What you really need to decide is whether it is worthwhile for you to test the actual behaviour of the application (for example, if what happens after every tick changes drastically from one tick to another), or whether it is sufficient (that is to say, the action is the same every time) to just test your logic.
Since the timer's behaviour is guaranteed never to change, it's either going to work properly (ie, you've configured it right) or not; it seems to be to be wasted effort to include that in your test if you don't actually need to.
A:
I agree with Danny insofar as it probably makes sense from a unit-testing perspective to simply forget about the timer mechanism and just verify that the action itself works as expected. I would disagree, however, that it's wasted effort to include the configuration of the timer in an automated test suite of some kind. There are a lot of edge cases when it comes to working with timing applications and it's very easy to create a false sense of security by only testing the things that are easy to test.
I would recommend having a suite of tests that runs the timer as well as the real action. This suite will probably take a while to run and would likely not be something you would run all the time on your local machine. But setting these types of things up on a nightly automated build can really help root out bugs before they become too hard to find and fix.
So in short my answer to your question is don't worry about writing a few tests that do take a long time to run. Unit test what you can and make that test suite run fast and often but make sure to supplement that with integration tests that run less frequently but cover more of the application and its configuration.
| Unit testing a timer based application? | I am currently writing a simple, timer based mini app in C# that performs an action n times every k seconds.
I am trying to adopt a test driven development style, so my goal is to unit test all parts of the app.
So, my question is: Is there a good way to unit test a timer based class?
The problem, as I see it, is that there is a big risk that the tests will take uncomfortably long to execute, since they must wait so and so long for the desired actions to happen.
Especially if one wants realistic data (seconds), instead of using the minimal time resolution allowed by the framework (1 ms?).
I am using a mock object for the action, to register the number of times the action was called, and so that the action takes practically no time.
| [
"What I have done is to mock the timer, and also the current system time, that my events could be triggered immediately, but as far as the code under test was concerned time elapsed was seconds.\n",
"Len Holgate has a series of 20 articles on testing timer based code.\n",
"I think what I would do in this case is test the code that actually executes when the timer ticks, rather than the entire sequence. What you really need to decide is whether it is worthwhile for you to test the actual behaviour of the application (for example, if what happens after every tick changes drastically from one tick to another), or whether it is sufficient (that is to say, the action is the same every time) to just test your logic.\nSince the timer's behaviour is guaranteed never to change, it's either going to work properly (ie, you've configured it right) or not; it seems to be to be wasted effort to include that in your test if you don't actually need to.\n",
"I agree with Danny insofar as it probably makes sense from a unit-testing perspective to simply forget about the timer mechanism and just verify that the action itself works as expected. I would also say that I disagree in that it's wasted effort to include the configuration of the timer in an automated test suite of some kind. There are a lot of edge cases when it comes to working with timing applications and it's very easy to create a false sense of security by only testing the things that are easy to test.\nI would recommend having a suite of tests that runs the timer as well as the real action. This suite will probably take a while to run and would likely not be something you would run all the time on your local machine. But setting these types of things up on a nightly automated build can really help root out bugs before they become too hard to find and fix.\nSo in short my answer to your question is don't worry about writing a few tests that do take a long time to run. Unit test what you can and make that test suite run fast and often but make sure to supplement that with integration tests that run less frequently but cover more of the application and its configuration.\n"
] | [
11,
11,
4,
2
] | [] | [] | [
".net",
"c#",
"timer",
"unit_testing"
] | stackoverflow_0000012045_.net_c#_timer_unit_testing.txt |
Q:
How is the HTML on this site so clean?
I work with C# at work but dislike how WebForms spews out a lot of JavaScript, not to mention the many lines of ViewState it creates.
That's why I like coding with PHP as I have full control.
But I was just wondering how this site's HTML is so clean and elegant?
Does using MVC have something to do with it? I see that jQuery is used, but surely you still use asp:required validators? If you do, where is all the hideous code that they normally produce?
And if they aren't using required field validators, why not? Surely it's quicker to develop with than using jQuery?
One of the main reasons I code my personal sites in PHP was the more elegant HTML it produces, but if I can produce code like this site's then I will go full-time .NET!
A:
One of the goals of ASP.NET MVC is to give you control of your markup. However, there have always been choices with ASP.NET which would allow you to generate relatively clean HTML.
For instance, ASP.NET has always offered a choice with validator controls. Do you value development speed over markup? Use validators. Value markup over development speed? Pick another validation mechanism. Your comments on validators are kind of contradictory there - it's possible to use ASP.NET and still make choices for markup purity over development speed.
Also, with webforms, we've had the CSS Friendly Control Adapters for a few years which will modify the controls to render more semantic markup. ASP.NET 3.5 included the ListView, which makes it really easy to write repeater type controls which emit semantic HTML. We used ASP.NET webforms on the Microsoft PDC site and have kept the HTML pretty clean: http://microsoftpdc.com/Agenda/Speakers.aspx - the Viewstate could probably be disabled on most pages, although in reality it's only a few dozen bytes.
A:
You were on the right track. It is the fact that they are using the ASP.NET MVC web framework. It allows you to have full control of your output html.
A:
The ASP.NET MVC Framework is an alternative to the normal "web forms" way of doing ASP.NET development. With it you lose a lot of abstraction, but gain a lot of control.
A:
Yes - MVC doesn't utilize the ASP.NET view state junk.
| How is the HTML on this site so clean? | I work with C# at work but dislike how with webforms it spews out a lot of JavaScript not including the many lines for viewstate that it creates.
That's why I like coding with PHP as I have full control.
But I was just wondering how this sites HTML is so clean and elegant?
Does using MVC have something to do with it? I see that JQuery is used but surely you still use asp:required validators? If you do, where is all the hideous code that it normally produces?
And if they arent using required field validators, why not? Surely it's quicker to develop in than using JQuery?
One of the main reasons I code my personal sites in PHP was due to the more elegant HTML that it produces but if I can produce code like this site then I will go full time .net!
| [
"One of the goals of ASP.NET MVC is to give you control of your markup. However, there have always been choices with ASP.NET which would allow you to generate relatively clean HTML.\nFor instance, ASP.NET has always offered a choice with validator controls. Do you value development speed over markup? Use validators. Value markup over development speed? Pick another validation mechanism. Your comments on validators are kind of contradictory there - it's possible to use ASP.NET and still make choices for markup purity over development speed.\nAlso, with webforms, we've had the CSS Friendly Control Adapters for a few years which will modify the controls to render more semantic markup. ASP.NET 3.5 included the ListView, which makes it really easy to write repeater type controls which emit semantic HTML. We used ASP.NET webforms on the Microsoft PDC site and have kept the HTML pretty clean: http://microsoftpdc.com/Agenda/Speakers.aspx - the Viewstate could probably be disabled on most pages, although in reality it's only a few dozen bytes.\n",
"You were on the right track. It is the fact that they are using the ASP.NET MVC web framework. It allows you to have full control of your output html.\n",
"The ASP.NET MVC Framework is an alternative to the normal \"web forms\" way of doing ASP.NET development. With it you lose a lot of abstraction, but gain a lot of control.\n",
"Yes - MVC doesn't utilize the ASP.NET view state junk.\n"
] | [
9,
3,
3,
2
] | [] | [] | [
"html",
"semantic_markup"
] | stackoverflow_0000012768_html_semantic_markup.txt |
Q:
Upload binary data with Silverlight 2b2
I am trying to upload a file or stream of data to our web server and I cant find a decent way of doing this. I have tried both WebClient and WebRequest both have their problems.
WebClient
Nice and easy, but you do not get any notification that the asynchronous upload has completed, and the UploadProgressChanged event doesn't get called back with anything useful. The alternative is to convert your binary data to a string and use UploadStringAsync, because then at least you get a UploadStringCompleted; the problem is you need a lot of RAM for big files, as it's encoding all the data and uploading it in one go.
HttpWebRequest
Bit more complicated but still does what is needed. The problem I am getting is that even though it is called on a background thread (supposedly), it still seems to be blocking my UI and the whole browser until the upload has completed, which doesn't seem quite right.
Normal .NET does have some appropriate WebClient methods for OnUploadDataCompleted and progress, but these aren't available in Silverlight .NET ... a big omission, I think!
Does anyone have any solutions? I need to upload multiple binary files, preferably with progress, and I need to perform some actions when the files have completed their upload.
Look forward to some help with this.
A:
The way I get around it is through INotifyPropertyChanged and event notification.
The essentials:
public void DoIt()
{
    this.IsUploading = true;

    WebRequest postRequest = WebRequest.Create(new Uri(ServiceURL));
    postRequest.BeginGetRequestStream(new AsyncCallback(RequestOpened), postRequest);
}

private void RequestOpened(IAsyncResult result)
{
    WebRequest req = result.AsyncState as WebRequest;
    req.BeginGetResponse(new AsyncCallback(GetResponse), req);
}

private void GetResponse(IAsyncResult result)
{
    WebRequest req = result.AsyncState as WebRequest;
    WebResponse postResponse = req.EndGetResponse(result);

    StreamReader responseReader = new StreamReader(postResponse.GetResponseStream());
    string serverresult = responseReader.ReadToEnd();

    this.IsUploading = false;
}

private bool _IsUploading;
public bool IsUploading
{
    get { return _IsUploading; }
    private set
    {
        _IsUploading = value;
        OnPropertyChanged("IsUploading"); // raises INotifyPropertyChanged's PropertyChanged event
    }
}
Right now Silverlight is a PITA because of the double and triple async calls.
A:
Matt Berseth had some thoughts in this, might help:
http://mattberseth.com/blog/2008/07/aspnet_file_upload_with_realti_1.html
@Dan - Apologies mate, I coulda sworn Matt's article was about Silverlight, but it's quite clearly not. Blame it on those two big glasses of Chilean red I just downed. :-)
A:
Thanks. The problem I can see with the article is that it's not talking about Silverlight, and Silverlight has limited functionality; they have removed some necessary events and methods for binary transfers for no reason.
I need to use Silverlight as I need/want to upload multiple files and HTML does not allow for a multiple file upload.
A:
This was pretty much what I was doing, the problem I was getting was that my UI was getting locked up.
As you suggested what I was doing already, I presumed the problem was somewhere else, so I used the old divide and conquer to narrow it down. It wasn't the actual update code; it was my attempt to Dispatch a request to update my progress bar during the upload stream code.
Thanks for the advice.
| Upload binary data with Silverlight 2b2 | I am trying to upload a file or stream of data to our web server and I cant find a decent way of doing this. I have tried both WebClient and WebRequest both have their problems.
WebClient
Nice and easy but you do not get any notification that the asynchronous upload has completed, and the UploadProgressChanged event doesnt get called back with anything useful. The alternative is to convert your binary data to a string and use UploadStringASync because then at least you get a UploadStringCompleted, problem is you need a lot of ram for big files as its encoding all the data and uploading it in one go.
HttpWebRequest
Bit more complicated but still does what is needed, problem I am getting is that even though it is called on a background thread (supposedly), it still seems to be blocking my UI and the whole browser until the upload has completed which doesnt seem quite right.
Normal .net does have some appropriate WebClient methods for OnUploadDataCompleted and progress but these arent available in Silverlight .net ... big omission I think!
Does anyone have any solutions, I need to upload multiple binary files preferrably with a progress but I need to perform some actions when the files have completed their upload.
Look forward to some help with this.
| [
"The way i get around it is through INotifyPropertyChanged and event notification.\nThe essentials:\n public void DoIt(){\nthis.IsUploading = True; \n\n WebRequest postRequest = WebRequest.Create(new Uri(ServiceURL));\n\n postRequest.BeginGetRequestStream(new AsyncCallback(RequestOpened), postRequest);\n }\n\nprivate void RequestOpened(IAsyncResult result){\n WebRequest req = result.AsyncState as WebRequest;\n req.BeginGetResponse(new AsyncCallback(GetResponse), req);\n }\n\n private void GetResponse(IAsyncResult result)\n {\n WebRequest req = result.AsyncState as WebRequest;\n string serverresult = string.Empty;\n WebResponse postResponse = req.EndGetResponse(result);\n\n StreamReader responseReader = new StreamReader(postResponse.GetResponseStream());\n\nthis.IsUploading= False;\n}\n\n private Bool_IsUploading;\n public Bool IsUploading\n {\n get { return _IsUploading; }\n private set\n {\n\n _IsUploading = value;\n\n OnPropertyChanged(\"IsUploading\");\n }\n }\n\nRight now silverlight is a PiTA because of the double and triple Async calls. \n",
"Matt Berseth had some thoughts in this, might help:\nhttp://mattberseth.com/blog/2008/07/aspnet_file_upload_with_realti_1.html\n@Dan - Apologies mate, I coulda sworn Matt's article was about Silverlight, but it's quite clearly not. Blame it on those two big glasses of Chilean red I just downed. :-) \n",
"Thanks, problem I can see with the article is that its not talking about Silverlight, and Silverlight has limited functions, for some reason they have removed some necessary events and methods for binary transfers for no reason. \nI need to use Silverlight as I need/want to upload multiple files and HTML does not allow for a multiple file upload.\n",
"This was pretty much what I was doing, the problem I was getting was that my UI was getting locked up.\nAs you suggested what I was doing already, I presumed the problem was somewhere else so I used the old divide and conquer to narrow down the problem and it wasnt the actual update code, it was my attempt to Dispatch a request to update my progress bar during the upload stream code.\nThanks for the advice.\n"
] | [
1,
0,
0,
0
] | [] | [] | [
".net",
"silverlight"
] | stackoverflow_0000012642_.net_silverlight.txt |
Q:
Best way to custom edit records in ASP.NET?
I'm coming from a Rails background and doing some work on an ASP.NET project (not ASP.NET MVC). Newbie question: what's the easiest way to make a custom editor for a table of records?
For example: I have a bunch of data rows and want to change the "category" field on each -- maybe a dropdown, maybe a link, maybe the user types it in.
In Rails, I'd iterate over the rows to build a table, and would have a form for each row. The form would have an input box or dropdown, and submit the data to a controller like "/item/edit/15?category=foo" where 15 was the itemID and the new category was "foo".
I'm new to the ASP.NET model and am not sure of the "right" way to do this -- just the simplest way to get back the new data & save it off. Would I make a custom control and append it to each row? Any help appreciated.
A:
You can REALLY cheat nowadays and take a peek at the new Dynamic Data that comes with .NET 3.5 SP1. Scott Guthrie has a blog entry demoing on how quick and easy it'll flow for you here:
http://weblogs.asp.net/scottgu/archive/2007/12/14/new-asp-net-dynamic-data-support.aspx
Without getting THAT cutting edge, I'd use the XSD generator to generate a strongly typed DataSet that coincides with the table in question. This will also generate the TableAdapter you can use to do all your CRUD statements.
From there, bind it to a DataGrid and leverage all the standard templates/events involved with that, such as EditIndex, SelectedIndex, RowEditing, RowUpdated, etc.
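To give a flavor of that event wiring, here's a minimal sketch for a classic DataGrid (Grid is the control instance, BindGrid() is an assumed helper that re-queries and re-binds the data source, and the cell/control indexes are illustrative):

protected void Grid_EditCommand(object source, DataGridCommandEventArgs e)
{
    Grid.EditItemIndex = e.Item.ItemIndex;
    BindGrid();
}

protected void Grid_UpdateCommand(object source, DataGridCommandEventArgs e)
{
    // Assumes the edited cell renders a TextBox as its first child control.
    TextBox edited = (TextBox)e.Item.Cells[1].Controls[0];
    // ...persist edited.Text for this row, e.g. via the generated TableAdapter...
    Grid.EditItemIndex = -1;
    BindGrid();
}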
I've been doing this since the early 1.0 days of .NET and this kind of functionality has only gotten more and more streamlined with every update of the Framework.
EDIT: I want to give a quick nod to the Matt Berseth blog as well. I've been following a lot of his stuff for a while now and it is great!
A:
There are a few controls that will do this for you, with varying levels of complexity depending on their relative flexibility.
The traditional way to do this would be the DataGrid control, which gives you a table layout. If you want something with more flexibility in appearance, the DataList and ListView controls also have built-in support for editing, inserting or deleting fields as well.
Check out Matt Berseth's blog for some excellent examples of asp.net controls in action.
A:
Thanks for the answers guys. It looks like customizing the DataGrid is the way to go. For any ASP.NET newbies, here's what I'm doing
<asp:DataGrid ID="GridView1" runat="server" AutoGenerateColumns="False">
<Columns>
<asp:BoundColumn DataField="RuleID" Visible="False" HeaderText="RuleID"></asp:BoundColumn>
<asp:TemplateColumn HeaderText="Category">
<ItemTemplate>
<!-- in case we want to display an image -->
<asp:Literal ID="litImage" runat="server">
</asp:Literal>
<asp:DropDownList ID="categoryListDropdown" runat="server"></asp:DropDownList>
</ItemTemplate>
</asp:TemplateColumn>
</Columns>
</asp:DataGrid>
This creates a datagrid. We can then bind it to a data source (DataTable in my case) and use things like
foreach (DataGridItem item in this.GridView1.Items)
{
DropDownList categoryListDropdown = ((DropDownList)item.FindControl("categoryListDropdown"));
categoryListDropdown.Items.AddRange(listItems.ToArray());
}
to populate the initial dropdown in the data grid. You can also access item.Cells[0].Text to get the RuleID in this case.
Notes for myself: The ASP.NET model does everything in the codebehind file. At a high level you can always iterate through GridView1.Items to get each row, and item.findControl("ControlID") to query the value stored at each item, such as after pressing an "Update" button.
| Best way to custom edit records in ASP.NET? | I'm coming from a Rails background and doing some work on a ASP.NET project (not ASP MVC). Newbie question: what's the easiest way to make a custom editor for a table of records?
For example: I have a bunch of data rows and want to change the "category" field on each -- maybe a dropdown, maybe a link, maybe the user types it in.
In Rails, I'd iterate over the rows to build a table, and would have a form for each row. The form would have an input box or dropdown, and submit the data to a controller like "/item/edit/15?category=foo" where 15 was the itemID and the new category was "foo".
I'm new to the ASP.NET model and am not sure of the "right" way to do this -- just the simplest way to get back the new data & save it off. Would I make a custom control and append it to each row? Any help appreciated.
| [
"You can REALLY cheat nowadays and take a peek at the new Dynamic Data that comes with .NET 3.5 SP1. Scott Guthrie has a blog entry demoing on how quick and easy it'll flow for you here:\nhttp://weblogs.asp.net/scottgu/archive/2007/12/14/new-asp-net-dynamic-data-support.aspx\nWithout getting THAT cutting edge, I'd use the XSD generator to generate a strongly typed DataSet that coincides with the table in question. This will also generate the TableAdapter you can use to do all your CRUD statements. \nFrom there, bind it to a DataGrid and leverage all the standard templates/events involved with that, such as EditIndex, SelectedIndex, RowEditing, RowUpdated, etc.\nI've been doing this since the early 1.0 days of .NET and this kind of functionality has only gotten more and more streamlined with every update of the Framework.\nEDIT: I want to give a quick nod to the Matt Berseth blog as well. I've been following a lot of his stuff for a while now and it is great!\n",
"There are a few controls that will do this for you, with varying levels of complexity depending on their relative flexibility. \nThe traditional way to do this would be the DataGrid control, which gives you a table layout. If you want something with more flexibility in appearance, the DataList and ListView controls also have built-in support for editing, inserting or deleting fields as well.\nCheck out Matt Berseth's blog for some excellent examples of asp.net controls in action.\n",
"Thanks for the answers guys. It looks like customizing the DataGrid is the way to go. For any ASP.NET newbies, here's what I'm doing\n<asp:DataGrid ID=\"GridView1\" runat=\"server\" AutoGenerateColumns=\"False\">\n <Columns>\n <asp:BoundColumn DataField=\"RuleID\" Visible=\"False\" HeaderText=\"RuleID\"></asp:BoundColumn>\n <asp:TemplateColumn HeaderText=\"Category\">\n <ItemTemplate>\n <!-- in case we want to display an image -->\n <asp:Literal ID=\"litImage\" runat=\"server\">\n </asp:Literal>\n <asp:DropDownList ID=\"categoryListDropdown\" runat=\"server\"></asp:DropDownList>\n </ItemTemplate>\n </asp:TemplateColumn>\n\n </Columns>\n</asp:DataGrid>\n\nThis creates a datagrid. We can then bind it to a data source (DataTable in my case) and use things like \nforeach (DataGridItem item in this.GridView1.Items)\n{\n DropDownList categoryListDropdown = ((DropDownList)item.FindControl(\"categoryListDropdown\"));\n categoryListDropdown.Items.AddRange(listItems.ToArray());\n}\n\nto populate the intial dropdown in the data grid. You can also access item.Cells[0].text to get the RuleID in this case.\nNotes for myself: The ASP.NET model does everything in the codebehind file. At a high level you can always iterate through GridView1.Items to get each row, and item.findControl(\"ControlID\") to query the value stored at each item, such as after pressing an \"Update\" button.\n"
] | [
2,
0,
0
] | [] | [] | [
"asp.net"
] | stackoverflow_0000012706_asp.net.txt |
Q:
Is a Homogeneous development platform good for the industry?
Is it in the best interests of the software development industry for one framework, browser or language to win the war and become the de facto standard? On one side it takes away the challenges of cross-platform work, but it opens things up to a single point of failure. Would it also result in a stagnation of innovation, or would it allow the industry to focus on more important things (whatever those might be)?
A:
De facto standards are bad because they are usually controlled by a single party. What is best for the industry is for there to be a foundation of open standards on top of which everyone can compete.
The web is a perfect example. When IE won the browser war, it stagnated for years, and is only just now starting to improve because it's hemorrhaging market share. The Netscape years prior to that weren't much better. The CSS 2.1 standard was released ten years ago and still isn't supported well. As a consequence, web development is a Black Art of hacks and work-arounds to get websites to render consistently.
My job would be a hundred times easier if I could build a website according to web standards and be confident it would display correctly. Just think of all the cool things we could have been working on instead of fixing IE's rendering errors.
A:
I believe whenever there is only 1 option, it will definitely stagnate innovation. If all we had was 1 language, then we wouldn't be able to solve anything but what that language was designed to solve.
Imperative languages like Java and C# solve a certain set of problems pretty well, but it also helps to think in a functional manner sometimes, such as with Haskell and Lisp.
Furthermore, cross platform issues are not an issue if you are talking about a web application, because you control the hardware and software (note, I am talking of the server side code of course, the browser cross platform issue is separate).
Paul Graham wrote a great essay on how the Web lets you as a developer use the tool you think will solve the problem best.
A:
No. Competition is good. It may make a web developers job easier, but I think it's bad for the industry. I personally prefer having choices.
I believe Joel Spolsky's technique of creating his own language (Wasabi) to insulate his company from being platform specific is a good one. I also believe it is a good idea to use products that accomplish similar things that are more targeted at specific problems like JQuery.
A:
I'm gonna have to agree with Mike on this one and say that without competition there is very little incentive to innovate.
| Is a Homogeneous development platform good for the industry? | Is it in best interests of the software development industry for one framework, browser or language to win the war and become the de facto standard? On one side it takes away the challenges of cross platform, but it opens it up for a single point of failure. Would it also result in a stagnation of innovation, or would it allow the industry to focus on more important things (whatever those might be).
| [
"Defacto standards are bad because they are usually controlled by a single party. What is best for the industry is for there to be a foundation of open standards on top of which everyone can compete. \nThe web is a perfect example. When IE won the browser war, it stagnated for years, and is only just now starting to improve because it's hemorrhaging marketshare. The Netscape years prior to that weren't much better. The CSS 2.1 standard was released ten years ago and still isn't supported well. As a consequence, web development is a Black Art of hacks and work-arounds to get websites to render consistently.\nMy job would be a hundred times easier if I could build a website according to web standards and be confident it would display correctly. Just think of all the cool things we could have been working on instead of fixing IE's rendering errors.\n",
"I believe whenever there is only 1 option, it will definitely stagnate innovation. If all we had was 1 language, then we wouldn't be able to solve anything but what that language was designed to solve.\nImperative languages like Java and C# solve a certain set of problems pretty well, but it also helps to think in a functional manner sometimes, such as with Haskell and Lisp.\nFurthermore, cross platform issues are not an issue if you are talking about a web application, because you control the hardware and software (note, I am talking of the server side code of course, the browser cross platform issue is separate).\nPaul Graham wrote a great essay on how the Web lets you as a developer use the tool you think will solve the problem best.\n",
"No. Competition is good. It may make a web developers job easier, but I think it's bad for the industry. I personally prefer having choices. \nI believe Joel Spolsky's technique of creating his own language (Wasabi) to insulate his company from being platform specific is a good one. I also believe it is a good idea to use products that accomplish similar things that are more targeted at specific problems like JQuery.\n",
"I'm gonna have to agree with Mike on this one and say that without competition there is very little incentive to innovate.\n"
] | [
11,
2,
0,
0
] | [] | [] | [
"cross_platform"
] | stackoverflow_0000012929_cross_platform.txt |
Q:
Windows/C++: How do I determine the share name associated with a shared drive?
Let's say I have a drive such as C:\, and I want to find out if it's shared and what it's share name (e.g. C$) is.
To find out if it's shared, I can use NetShareCheck.
How do I then map the drive to its share name? I thought that NetShareGetInfo would do it, but it looks like that takes the share name, not the local drive name, as an input.
A:
If all else fails, you could always use NetShareEnum and call NetShareGetInfo on each.
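A rough sketch of that enumeration (info level 2 needs admin rights; error handling and the resume-handle loop are omitted, and the path comparison is deliberately simplistic):

#include <stdio.h>
#include <wchar.h>
#include <windows.h>
#include <lm.h>
#pragma comment(lib, "netapi32.lib")

// Print every share name whose local path matches the given path, e.g. L"C:\\".
void PrintShareNamesForPath(const wchar_t* localPath)
{
    PSHARE_INFO_2 shares = NULL;
    DWORD entriesRead = 0, totalEntries = 0;

    if (NetShareEnum(NULL, 2, (LPBYTE*)&shares, MAX_PREFERRED_LENGTH,
                     &entriesRead, &totalEntries, NULL) == NERR_Success)
    {
        for (DWORD i = 0; i < entriesRead; ++i)
        {
            // shi2_path is the local path backing the share (e.g. C:\)
            if (shares[i].shi2_path != NULL &&
                _wcsicmp(shares[i].shi2_path, localPath) == 0)
            {
                wprintf(L"Shared as: %s\n", shares[i].shi2_netname);
            }
        }
        NetApiBufferFree(shares);
    }
}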
A:
I believe you're looking for WNetGetConnectionA or WNetGetConnectionW.
A:
Use;
SHGetFileInfo with SHGFI_ATTRIBUTES
upon return check the dwAttributes flag for SFGAO_SHARE.
I'm not sure how to find the actual path tho.
| Windows/C++: How do I determine the share name associated with a shared drive? | Let's say I have a drive such as C:\, and I want to find out if it's shared and what it's share name (e.g. C$) is.
To find out if it's shared, I can use NetShareCheck.
How do I then map the drive to its share name? I thought that NetShareGetInfo would do it, but it looks like that takes the share name, not the local drive name, as an input.
| [
"If all else fails, you could always use NetShareEnum and call NetShareGetInfo on each.\n",
"I believe you're looking for WNetGetConnectionA or WNetGetConnectionW.\n",
"Use;\nSHGetFileInfo with SHGFI_ATTRIBUTES\n\nupon return check the dwAttributes flag for SFGAO_SHARE.\nI'm not sure how to find the actual path tho.\n"
] | [
3,
1,
0
] | [] | [] | [
"c++",
"networking",
"share",
"windows"
] | stackoverflow_0000012594_c++_networking_share_windows.txt |
Q:
C# Database Access: DBNull vs null
We have our own ORM we use here, and provide strongly typed wrappers for all of our db tables. We also allow weakly typed ad-hoc SQL to be executed, but these queries still go through the same class for getting values out of a data reader.
In tweaking that class to work with Oracle, we've come across an interesting question. Is it better to use DBNull.Value, or null? Are there any benefits to using DBNull.Value? It seems more "correct" to use null, since we've separated ourselves from the DB world, but there are implications (you can't just blindly ToString() when a value is null, for example), so it's definitely something we need to make a conscious decision about.
A:
I find it better to use null instead of DBNull.
The reason is because, as you said, you're separating yourself from the DB world.
It is generally good practice to check reference types to ensure they aren't null anyway. You're going to be checking for null for things other than DB data, and I find it is best to keep consistency across the system, and use null, not DBNull.
In the long run, architecturally I find it to be the better solution.
A:
If you've written your own ORM, then I would say just use null, since you can use it however you want. I believe DBNull was originally used only to get around the fact that value types (int, DateTime, etc.) could not be null, so rather than return some value like zero or DateTime.Min, which would imply a null (bad, bad), they created DBNull to indicate this. Maybe there was more to it, but I always assumed that was the reason. However, now that we have nullable types in C# 3.0, DBNull is no longer necessary. In fact, LINQ to SQL just uses null all over the place. No problem at all. Embrace the future... use null. ;-)
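As an illustration, a small helper (my own sketch, not part of any particular ORM) can translate DBNull into a nullable type once, at the data-access boundary, so nothing downstream ever sees DBNull:

using System;
using System.Data;

public static class DataReaderHelper
{
    // Maps DBNull to null so the rest of the code only ever deals with null.
    public static T? GetNullable<T>(IDataReader reader, int ordinal) where T : struct
    {
        return reader.IsDBNull(ordinal) ? (T?)null : (T)reader.GetValue(ordinal);
    }
}

// Usage: int? categoryId = DataReaderHelper.GetNullable<int>(reader, 2);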
A:
From the experience I've had, the .NET DataTables and TableAdapters work better with DBNull. It also opens up a few special methods when strongly typed, such as DataRow.IsFirstNameNull, once strong typing is in place.
I wish I could give you a better technical answer than that, but for me the bottom line is use DBNull when working with the database related objects and then use a "standard" null when I'm dealing with objects and .NET related code.
A:
Use DBNull.
We encouintered some sort of problems when using null.
If I recall correctly you cannot INSERT a null value to a field, only DBNull.
Could be Oracle related only, sorry, I do not know the details anymore.
| C# Database Access: DBNull vs null | We have our own ORM we use here, and provide strongly typed wrappers for all of our db tables. We also allow weakly typed ad-hoc SQL to be executed, but these queries still go through the same class for getting values out of a data reader.
In tweaking that class to work with Oracle, we've come across an interesting question. Is it better to use DBNull.Value, or null? Are there any benefits to using DBNull.Value? It seems more "correct" to use null, since we've separated ourselves from the DB world, but there are implications (you can't just blindly ToString() when a value is null for example) so its definitely something we need to make a conscious decision about.
| [
"I find it better to use null, instead of DB null.\nThe reason is because, as you said, you're separating yourself from the DB world.\nIt is generally good practice to check reference types to ensure they aren't null anyway. You're going to be checking for null for things other than DB data, and I find it is best to keep consistency across the system, and use null, not DBNull.\nIn the long run, architecturally I find it to be the better solution.\n",
"If you've written your own ORM, then I would say just use null, since you can use it however you want. I believe DBNull was originally used only to get around the fact that value types (int, DateTime, etc.) could not be null, so rather than return some value like zero or DateTime.Min, which would imply a null (bad, bad), they created DBNull to indicate this. Maybe there was more to it, but I always assumed that was the reason. However, now that we have nullable types in C# 3.0, DBNull is no longer necessary. In fact, LINQ to SQL just uses null all over the place. No problem at all. Embrace the future... use null. ;-)\n",
"From the experience I've had, the .NET DataTables and TableAdapters work better with DBNull. It also opens up a few special methods when strongly typed, such as DataRow.IsFirstNameNull when in place.\nI wish I could give you a better technical answer than that, but for me the bottom line is use DBNull when working with the database related objects and then use a \"standard\" null when I'm dealing with objects and .NET related code.\n",
"Use DBNull.\nWe encouintered some sort of problems when using null.\nIf I recall correctly you cannot INSERT a null value to a field, only DBNull.\nCould be Oracle related only, sorry, I do not know the details anymore.\n"
] | [
15,
7,
3,
1
] | [] | [] | [
"c#",
"dbnull",
"null",
"orm"
] | stackoverflow_0000012836_c#_dbnull_null_orm.txt |
Q:
Best way to capture key events in NSTextView?
I'm slowly learning Objective-C and Cocoa, and the only way I see so far to capture key events in Text Views is to use delegation, but I'm having trouble finding useful documentation and examples on how to implement such a solution. Can anyone point me in the right direction or supply some first-hand help?
A:
Generally, the way you implement it is simply to add the required function to your view's controller, and set its delegate. For example, if you want code to run when the view loads, you just delegate your view to the controller, and implement the awakeFromNib function.
So, to detect a key press in a text view, make sure your controller is the text view's delegate, and then implement this:
- (void)keyUp:(NSEvent *)theEvent
Note that this is an inherited NSResponder method, not a NSTextView method.
A:
Just a tip for syntax highlighting:
Don't highlight the whole text view at once - it's very slow. Also don't highlight the last edited text using -editedRange - it's very slow too if the user pastes a large body of text into the text view.
Instead you need to highlight the visible text which is done like this:
NSRect visibleRect = [[[textView enclosingScrollView] contentView] documentVisibleRect];
NSRange visibleRange = [[textView layoutManager] glyphRangeForBoundingRect:visibleRect inTextContainer:[textView textContainer]];
Then you feed visibleRange to your highlighting code.
A:
It's important to tell us what you're really trying to accomplish — the higher-level goal that you think capturing key events in an NSTextView will address.
For example, when someone asks me how to capture key events in an NSTextField what they really want to know is how to validate input in the field. That's done by setting the field's formatter to an instance of NSFormatter (whether one of the formatters included in Cocoa or a custom one), not by processing keystrokes directly.
So given that example, what are you really trying to accomplish?
A:
I've done some hard digging, and I did find an answer to my own question. I'll get at it below, but thanks to the two fellas who replied. I think that Stack Overflow is a fantastic site already--I hope more Mac developers find their way in once the beta is over--this could be a great resource for other developers looking to transition to the platform.
So, I did, as suggested by Danny, find my answer in delegation. What I didn't understand from Danny's post was that there are a set of delegate-enabled methods in the delegating object, and that the delegate must implement said events. And so for a TextView, I was able to find the method textDidChange, which accomplished what I wanted in an even better way than simply capturing key presses would have done. So if I implement this in my controller:
- (void)textDidChange:(NSNotification *)aNotification;
I can respond to the text being edited. There are, of course, other methods available, and I'm excited to play with them, because I know I'll learn a whole lot as I do. Thanks again, guys.
| Best way to capture key events in NSTextView? | I'm slowly learning Objective-C and Cocoa, and the only way I see so far to capture key events in Text Views is to use delegation, but I'm having trouble finding useful documentation and examples on how to implement such a solution. Can anyone point me in the right direction or supply some first-hand help?
| [
"Generally, the way you implement it is simply to add the required function to your view's controller, and set its delegate. For example, if you want code to run when the view loads, you just delegate your view to the controller, and implement the awakeFromNib function.\nSo, to detect a key press in a text view, make sure your controller is the text view's delegate, and then implement this:\n- (void)keyUp:(NSEvent *)theEvent\n\nNote that this is an inherited NSResponder method, not a NSTextView method.\n",
"Just a tip for syntax highlighting:\nDon't highlight the whole text view at once - it's very slow. Also don't highlight the last edited text using -editedRange - it's very slow too if the user pastes a large body of text into the text view.\nInstead you need to highlight the visible text which is done like this:\nNSRect visibleRect = [[[textView enclosingScrollView] contentView] documentVisibleRect];\nNSRange visibleRange = [[textView layoutManager] glyphRangeForBoundingRect:visibleRect inTextContainer:[textView textContainer]];\n\nThen you feed visibleRange to your highlighting code.\n",
"It's important to tell us what you're really trying to accomplish — the higher-level goal that you think capturing key events in an NSTextView will address.\nFor example, when someone asks me how to capture key events in an NSTextField what they really want to know is how to validate input in the field. That's done by setting the field's formatter to an instance of NSFormatter (whether one of the formatters included in Cocoa or a custom one), not by processing keystrokes directly.\nSo given that example, what are you really trying to accomplish?\n",
"I've done some hard digging, and I did find an answer to my own question. I'll get at it below, but thanks to the two fellas who replied. I think that Stack Overflow is a fantastic site already--I hope more Mac developers find their way in once the beta is over--this could be a great resource for other developers looking to transition to the platform.\nSo, I did, as suggested by Danny, find my answer in delegation. What I didn't understand from Danny's post was that there are a set of delegate-enabled methods in the delegating object, and that the delegate must implement said events. And so for a TextView, I was able to find the method textDidChange, which accomplished what I wanted in an even better way than simply capturing key presses would have done. So if I implement this in my controller:\n- (void)textDidChange:(NSNotification *)aNotification;\n\nI can respond to the text being edited. There are, of course, other methods available, and I'm excited to play with them, because I know I'll learn a whole lot as I do. Thanks again, guys.\n"
] | [
14,
10,
2,
1
] | [] | [] | [
"cocoa",
"events",
"objective_c"
] | stackoverflow_0000011291_cocoa_events_objective_c.txt |
Q:
How do I convert a date to a HTTP-formatted date in .Net / C#
How does one convert a .Net DateTime into a valid HTTP-formatted date string?
A:
Dates can be converted to HTTP valid dates (RFC 1123) by using the "r" format string in .Net. HTTP dates need to be GMT / not offset - this can be done using the ToUniversalTime() method.
So, in C# for example:
string HttpDate = SomeDate.ToUniversalTime().ToString("r");
Right now, that produces HttpDate = "Sat, 16 Aug 2008 10:38:39 GMT"
See Standard Date and Time Format Strings for a list of .Net standard date & time format strings.
See Protocol Parameters for the HTTP date specification, and background to other valid (but dated) RFC types for HTTP dates.
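For the reverse direction, a hedged companion sketch (my addition, not part of the original answer) parses an RFC 1123 string back using the same "r" specifier:
using System.Globalization;

DateTime parsed = DateTime.ParseExact(HttpDate, "r", CultureInfo.InvariantCulture);
// Note: "r" parses to DateTimeKind.Unspecified, so treat the result as UTC yourself.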
| How do I convert a date to a HTTP-formatted date in .Net / C# | How does one convert a .Net DateTime into a valid HTTP-formatted date string?
| [
"Dates can be converted to HTTP valid dates (RFC 1123) by using the \"r\" format string in .Net. HTTP dates need to be GMT / not offset - this can be done using the ToUniversalTime() method.\nSo, in C# for example:\nstring HttpDate = SomeDate.ToUniversalTime().ToString(\"r\");\n\nRight now, that produces HttpDate = \"Sat, 16 Aug 2008 10:38:39 GMT\"\nSee Standard Date and Time Format Strings for a list of .Net standard date & time format strings.\nSee Protocol Parameters for the HTTP date specification, and background to other valid (but dated) RFC types for HTTP dates.\n"
] | [
90
] | [] | [] | [
".net",
"c#",
"http"
] | stackoverflow_0000013087_.net_c#_http.txt |
Q:
What do ref, val and out mean on method parameters?
I'm looking for a clear, concise and accurate answer.
Ideally as the actual answer, although links to good explanations welcome.
This also applies to VB.Net, but the keywords are different - ByRef and ByVal.
A:
By default (in C#), passing an object to a function actually passes a copy of the reference to that object. Changing the parameter itself only changes the value in the parameter, and not the variable that was specified.
void Test1(string param)
{
param = "new value";
}
string s1 = "initial value";
Test1(s1);
// s1 == "initial value"
Using out or ref passes a reference to the variable specified in the call to the function. Any changes to the value of an out or ref parameter will be passed back to the caller.
Both out and ref behave identically except for one slight difference: ref parameters are required to be initialised before calling, while out parameters can be uninitialised. By extension, ref parameters are guaranteed to be initialised at the start of the method, while out parameters are treated as uninitialised.
void Test2(ref string param)
{
param = "new value";
}
void Test3(out string param)
{
// Use of param here will not compile
param = "another value";
}
string s2 = "initial value";
string s3;
Test2(ref s2);
// s2 == "new value"
// Test2(ref s3); // Passing ref s3 will not compile
Test3(out s2);
// s2 == "another value"
Test3(out s3);
// s3 == "another value"
Edit: As dp points out, the difference between out and ref is only enforced by the C# compiler, not by the CLR. As far as I know, VB has no equivalent for out and implements ref (as ByRef) only, matching the support of the CLR.
A:
One additional note about ref vs. out: The distinction between the two is enforced by the C# compiler. The CLR does not distinguish between between out and ref. This means that you cannot have two methods whose signatures differ only by an out or ref
void foo(int value) {}
// Only one of the following would be allowed
// valid to overload with ref
void foo(ref int value) {}
// OR with out
void foo(out int value) {}
A:
out means that the parameter will be initialised by the method:
int result; //not initialised
if( int.TryParse( "123", out result ) )
//result is now 123
else
    //if TryParse failed, result has still been
    // initialised to its default value (0)
ref will force the underlying reference to be passed:
void ChangeMyClass1( MyClass input ) {
input.MyProperty = "changed by 1";
input = null;
//can't see input anymore ...
// I've only nulled my local scope's reference
}
void ChangeMyClass2( ref MyClass input ) {
input.MyProperty = "changed by 2";
input = null;
//the passed reference is now null too.
}
MyClass tester = new MyClass { MyProperty = "initial value" };
ChangeMyClass1( tester );
// now tester.MyProperty is "changed by 1"
ChangeMyClass2( ref tester );
// now tester is null
A:
One of my own questions at stackoverflow handles this topic too.
It deals with "pass by reference" and "pass by value" in different types of languages; C# is included, so maybe you can find some extra information there as well.
Basically it comes down to:
ref: the parameter with the ref keyword will be passed by reference
out: the parameter with the out keyword will be treated as an output parameter
but that's really the most basic answer you can give, as it is a little more complex than it is stated here
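For completeness, a small sketch of my own showing that ref also works on value types, letting a method swap its caller's variables - something by-value parameters cannot do:
void Swap(ref int a, ref int b)
{
    int t = a;
    a = b;
    b = t;
}

int x = 1, y = 2;
Swap(ref x, ref y);
// x == 2, y == 1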
| What do ref, val and out mean on method parameters? | I'm looking for a clear, concise and accurate answer.
Ideally as the actual answer, although links to good explanations welcome.
This also applies to VB.Net, but the keywords are different - ByRef and ByVal.
| [
"By default (in C#), passing an object to a function actually passes a copy of the reference to that object. Changing the parameter itself only changes the value in the parameter, and not the variable that was specified.\nvoid Test1(string param)\n{\n param = \"new value\";\n}\n\nstring s1 = \"initial value\";\nTest1(s1);\n// s1 == \"initial value\"\n\nUsing out or ref passes a reference to the variable specified in the call to the function. Any changes to the value of an out or ref parameter will be passed back to the caller.\nBoth out and ref behave identically except for one slight difference: ref parameters are required to be initialised before calling, while out parameters can be uninitialised. By extension, ref parameters are guaranteed to be initialised at the start of the method, while out parameters are treated as uninitialised.\nvoid Test2(ref string param)\n{\n param = \"new value\";\n}\n\nvoid Test3(out string param)\n{\n // Use of param here will not compile\n param = \"another value\";\n}\n\nstring s2 = \"initial value\";\nstring s3;\nTest2(ref s2);\n// s2 == \"new value\"\n// Test2(ref s3); // Passing ref s3 will not compile\nTest3(out s2);\n// s2 == \"another value\"\nTest3(out s3);\n// s3 == \"another value\"\n\nEdit: As dp points out, the difference between out and ref is only enforced by the C# compiler, not by the CLR. As far as I know, VB has no equivalent for out and implements ref (as ByRef) only, matching the support of the CLR.\n",
"One additional note about ref vs. out: The distinction between the two is enforced by the C# compiler. The CLR does not distinguish between between out and ref. This means that you cannot have two methods whose signatures differ only by an out or ref\nvoid foo(int value) {}\n\n// Only one of the following would be allowed\n\n// valid to overload with ref\nvoid foo(ref int value) {}\n\n// OR with out\nvoid foo(out int value) {}\n\n",
"out means that the parameter will be initialised by the method:\nint result; //not initialised\n\nif( int.TryParse( \"123\", out result ) )\n //result is now 123\nelse\n //if TryParse failed result has still be \n // initialised to its default value (0)\n\nref will force the underlying reference to be passed:\nvoid ChangeMyClass1( MyClass input ) {\n input.MyProperty = \"changed by 1\";\n input = null;\n //can't see input anymore ... \n // I've only nulled my local scope's reference\n}\n\nvoid ChangeMyClass2( ref MyClass input ) {\n input.MyProperty = \"changed by 2\";\n input = null;\n //the passed reference is now null too.\n}\n\nMyClass tester = new MyClass { MyProperty = \"initial value\" };\n\nChangeMyClass1( tester );\n// now tester.MyProperty is \"changed by 1\"\n\nChangeMyClass2( ref tester );\n// now tester is null\n\n",
"One of my own questions at stackoverflow handles this topic too.\nIt handles about \"pass by reference\" and \"pass by value\" in different types of languages, c# is included so maybe you can find some extra information there as well.\nBasically it comes down to:\n\nref: the parameter with the ref keyword will be passed by reference\nout: the parameter with the out keyword will be treated as an output parameter\n\nbut that's really the most basic answer you can give, as it is a little more complex than it is stated here\n"
] | [
27,
8,
3,
2
] | [] | [] | [
".net",
"c#",
"vb.net"
] | stackoverflow_0000013060_.net_c#_vb.net.txt |
Q:
Does the OutputCacheFilter in the Microsoft MVC Preview 4 actually save on action invocations?
We deployed a live, fresh, swanky site using preview 3 including rigorous stress testing.
Page Output caching was our saviour and afforded us the ability to meet our performance contractual requirements.
My question is, is there a difference between action OutputCacheFilter and page output caching?
Is the action output caching faster than page output caching?
A:
Internally, the OutputCacheAttribute (aka output cache filter) uses the same internal mechanism as page output caching (aka the @OutputCache directive).
Therefore, it's not any faster than page output caching. However, with MVC, you really can't use page output caching via the @OutputCache directive in MVC because we render the view (aka page) after the action runs. So you would gain very little benefit.
With the output cache filter, it does the correct thing and does not execute the action code if the result is in the output cache. Hope that helps. :)
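Assuming the preview's OutputCacheAttribute exposes the same settings as the @OutputCache directive (Duration, VaryByParam - an assumption, so check your preview's API), a usage sketch looks like:
[OutputCache(Duration = 60, VaryByParam = "none")]
public ActionResult Index()
{
    // Skipped entirely while a cached copy of this action's result exists.
    return View();
}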
A:
Just be aware that there currently is a bug if you call Html.RenderAction(..) on an Action that is marked to be cached. Instead of the specific action being cached, the entire page gets cached. I reported this on codeplex already and it seems to be a known issue:
Calling <% HTML.RenderAction<...>(...); %> to an Action with [OutputCache(..)] causes entire page to cache.
| Does the OutputCacheFilter in the Microsoft MVC Preview 4 actually save on action invocations? | We deployed a live, fresh, swanky site using preview 3 including rigorous stress testing.
Page Output caching was our saviour and afforded us the ability to meet our performance contractual requirements.
My question is, is there a difference between action OutputCacheFilter and page output caching?
Is the action output caching faster than page output caching?
| [
"Internally, the OutputCacheAttribute (aka output cache filter) uses the same internal mechanism as page output caching (aka the @OutputCache directive).\nTherefore, it's not any faster than page output caching. However, with MVC, you really can't use page output caching via the @OutputCache directive in MVC because we render the view (aka page) after the action runs. So you would gain very little benefit.\nWith the output cache filter, it does the correct thing and does not execute the action code if the result is in the output cache. Hope that helps. :)\n",
"Just be aware that there currently is a bug if you call Html.RenderAction(..) on an Action that is marked to be cached. Instead of the specific action being cached, the entire page gets cached. I reported this on codeplex already and it seems to be a known issue:\nCalling <% HTML.RenderAction<...>(...); %> to an Action with [OutputCache(..)] causes entire page to cache.\n"
] | [
3,
1
] | [] | [] | [
"asp.net",
"asp.net_mvc",
"c#",
"outputcache"
] | stackoverflow_0000010661_asp.net_asp.net_mvc_c#_outputcache.txt |
Q:
Best practice for webservices
I've created a webservice, and when I want to use its methods I instantiate it in a procedure, call the method, and finally dispose of it. However, I think it could also be okay to instantiate the webservice in the "private void Main_Load(object sender, EventArgs e)" event.
The thing is that if I do it the first way I have to instantiate the webservice every time I need one of its methods, but the other way I have to keep a webservice instance alive the whole time I'm using it in a form, for example.
I would like to know which of these practices is better, or if there's a much better way to do it.
Strategy 1
private void btnRead_Click(object sender, EventArgs e)
{
try
{
//Show clock
this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents();
//Connect to webservice
svc = new ForPocketPC.ServiceForPocketPC();
svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password);
svc.AllowAutoRedirect = false;
svc.UserAgent = Settings.UserAgent;
svc.PreAuthenticate = true;
svc.Url = Settings.Url;
svc.Timeout = System.Threading.Timeout.Infinite;
svc.CallMethod();
...
}
catch (Exception ex)
{
ShowError(ex);
}
finally
{
if (svc != null)
svc.Dispose();
}
}
Strategy 2
private myWebservice svc;
private void Main_Load(object sender, EventArgs e)
{
//Connect to webservice
svc = new ForPocketPC.ServiceForPocketPC();
svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password);
svc.AllowAutoRedirect = false;
svc.UserAgent = Settings.UserAgent;
svc.PreAuthenticate = true;
svc.Url = Settings.Url;
svc.Timeout = System.Threading.Timeout.Infinite;
}
private void btnRead_Click(object sender, EventArgs e)
{
try
{
//Show clock
this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents();
svc.CallMethod();
...
}
catch (Exception ex)
{
ShowError(ex);
}
}
private void Main_Closing(object sender, CancelEventArgs e)
{
svc.Dispose();
}
A:
It depends on how often you are going to be calling the web service. If you're going to be calling it almost constantly, it would probably be better to use method #2. However, if it's not going to be getting called quite so often, you are better off using method #1, and only instantiating it when you need it.
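A possible middle ground (my sketch, not from the answers above): centralise the setup in a factory method and dispose per call with using, so nothing is held open between clicks but the configuration lives in one place:
private ForPocketPC.ServiceForPocketPC CreateService()
{
    ForPocketPC.ServiceForPocketPC svc = new ForPocketPC.ServiceForPocketPC();
    svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password);
    svc.PreAuthenticate = true;
    svc.Url = Settings.Url;
    return svc;
}

private void btnRead_Click(object sender, EventArgs e)
{
    using (ForPocketPC.ServiceForPocketPC svc = CreateService())
    {
        svc.CallMethod();
    }
}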
A:
Right now I have made a solution for a mobile device, and it turns out to be used at irregular times (it could be used in 10 minutes, 1 hour, or 4 hours; it's very variable), so it seems that the better approach is the first strategy.
Last year we worked on a project where we used webservices. The fact is that we instantiated our webservices in the Sub New() procedure and it ran very well; however, sometimes users complained that after they stepped away from their chairs and returned to continue in the application, they received a timeout error message and had to log in again.
We thought that maybe that was OK because the users had been away from their seats for a very long time, but once, during a presentation of the application to the CEOs, exactly the same scenario happened; personally I didn't like that behaviour, and that's why I asked.
Thanks for the answer.
| Best practice for webservices | I've created a webservice, and when I want to use its methods I instantiate it in a procedure, call the method, and finally dispose of it. However, I think it could also be okay to instantiate the webservice in the "private void Main_Load(object sender, EventArgs e)" event.
The thing is that if I do it the first way I have to instantiate the webservice every time I need one of its methods, but the other way I have to keep a webservice instance alive the whole time I'm using it in a form, for example.
I would like to know which of these practices is better, or if there's a much better way to do it.
Strategy 1
private void btnRead_Click(object sender, EventArgs e)
{
try
{
//Show clock
this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents();
//Connect to webservice
svc = new ForPocketPC.ServiceForPocketPC();
svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password);
svc.AllowAutoRedirect = false;
svc.UserAgent = Settings.UserAgent;
svc.PreAuthenticate = true;
svc.Url = Settings.Url;
svc.Timeout = System.Threading.Timeout.Infinite;
svc.CallMethod();
...
}
catch (Exception ex)
{
ShowError(ex);
}
finally
{
if (svc != null)
svc.Dispose();
}
}
Strategy 2
private myWebservice svc;
private void Main_Load(object sender, EventArgs e)
{
//Connect to webservice
svc = new ForPocketPC.ServiceForPocketPC();
svc.Credentials = new System.Net.NetworkCredential(Settings.UserName, Settings.Password);
svc.AllowAutoRedirect = false;
svc.UserAgent = Settings.UserAgent;
svc.PreAuthenticate = true;
svc.Url = Settings.Url;
svc.Timeout = System.Threading.Timeout.Infinite;
}
private void btnRead_Click(object sender, EventArgs e)
{
try
{
//Show clock
this.picResult.Image = new Bitmap(pathWait);
Application.DoEvents();
svc.CallMethod();
...
}
catch (Exception ex)
{
ShowError(ex);
}
}
private void Main_Closing(object sender, CancelEventArgs e)
{
svc.Dispose();
}
| [
"It depends on how often you are going to be calling the web service. If you're going to be calling it almost constantly, it would probably be better to use method #2. However, if it's not going to be getting called quite so often, you are better off using method #1, and only instantiating it when you need it.\n",
"Right now I made a solution for a mobile device and it turns to be used on irregular times, it could be used in 10 minutes, 1 hour, 4 hours its very variable, it seems that the better aproach is the first strategy.\nLast year we went on a project where we used webservices, the fact is that we instantiated our webservices at the Sub New() procedure and it run it very well, however, sometimes some users claimed at us that they woke up from their chairs and when they returned and tried to continue on the application they received a timeout error message and they had to re-login again.\nWe thougth that maybe that was Ok because maybe the users went out for a very long time out of their seats, but once in a presentation of the application with the CEOs it happened exactly the same scenario and personally I didn't like that behaviour and that's why the question.\nThanks for the answer.\n"
] | [
2,
0
] | [] | [] | [
"web_services"
] | stackoverflow_0000013160_web_services.txt |
Q:
Upgrade to ASP.NET 3.x
I am currently aware that ASP.NET 2.0 is out and about and that there are 3.x versions of the .Net Framework.
Is it possible to upgrade my ASP.NET web server to version 3.x of the .Net Framework?
I have tried this, however, when selecting which version of the .Net framework to use in IIS (the ASP.NET Tab), only versions 1.1 and 2.0 show.
Is there a work around?
A:
if I install 3.5 and have IIS setup to use 2.0. I will be able to use 3.5 features?
Yes, that is correct. You have IIS set to 2.0 for both 2.0 and 3.5 sites, as they both run on the same CLR. 3.5 uses a different compile method than 2.0. This is declared in the web.config for the site. See this post for more details on this. But the setup in IIS for both 3.5 and 2.0 ASP.net sites is identical.
A:
Unfortunately, the statement ".NET versions can be installed side-by-side, so it won't disrupt any legacy apps" isn't entirely true.
If you install 3.5, it requires 2.0 SP1, which can disrupt legacy applications that use 2.0 and connect to Oracle database servers.
Sure, download the 3.5 redistributable, install it on the server, and you're good to go. .NET versions can be installed side-by-side, so it won't disrupt any "legacy" apps.
http://www.microsoft.com/downloads/details.aspx?FamilyId=333325FD-AE52-4E35-B531-508D977D32A6&displaylang=en
A:
GateKiller,
.NET 3.0 and .NET 3.5 did not change the version of the CLR, so "using ASP.NET 3.5" is a more complicated thing that it sounds like it should be at first. In essence, you're still running on the 2.0 CLR, but you're using the C# 3.0 compiler and linking against the 3.5 libraries. It means adding a bunch of stuff to your Web.config file to become an ASP.NET 3.5 project.
Scott Hanselman has an awesome blog post covering the details:
http://www.hanselman.com/blog/HowToSetAnIISApplicationOrAppPoolToUseASPNET35RatherThan20.aspx
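In abbreviated form (a sketch only - see Scott's post for the complete file), the essence is that web.config pulls in the 3.5 assemblies while IIS keeps pointing at the 2.0 runtime:
<compilation>
  <assemblies>
    <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089" />
  </assemblies>
</compilation>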
A:
The version you are selecting in IIS is the version of the CLR to use. There are only two versions of the CLR. The .NET Framework 3.5 runs on CLR 2.0
A:
The new framework is .Net 3.5, you'll have a new assembly System.Core, + a few more if you use features like Linq
.Net 3.5 comes with the new C#3.0 compiler
ASP.Net is still version 2.0
Lovely and confusing isn't it ;-)
You should upgrade the .Net framework on the server to .Net 3.5 SP1, but you're still going to be running ASP.Net 2.0
| Upgrade to ASP.NET 3.x | I am currently aware that ASP.NET 2.0 is out and about and that there are 3.x versions of the .Net Framework.
Is it possible to upgrade my ASP.NET web server to version 3.x of the .Net Framework?
I have tried this, however, when selecting which version of the .Net framework to use in IIS (the ASP.NET Tab), only versions 1.1 and 2.0 show.
Is there a work around?
| [
"\nif I install 3.5 and have IIS setup to use 2.0. I will be able to use 3.5 features?\n\nYes, that is correct. You have IIS set to 2.0 for both 2.0 and 3.5 sites, as they both run on the same CLR. 3.5 uses a different compile method than 2.0. This is declared in the web.config for the site. See this post for more details on this. But the setup in IIS for both 3.5 and 2.0 ASP.net sites is identical.\n",
"Unfortunately, the statement .NET versions can be installed side-by-side, so it won't disrupt any \"legacy\" apps isn't entirely true. If you install 3.5, it requires 2.0 SP1, which can disrupt legacy applications that uses 2.0 and connects to Oracle database servers.\n",
"Sure, download the 3.5 redistributable, install it on the servre, and you're good to go. .NET versions can be installed side-by-side, so it won't disrupt any \"legacy\" apps.\nhttp://www.microsoft.com/downloads/details.aspx?FamilyId=333325FD-AE52-4E35-B531-508D977D32A6&displaylang=en\n",
"GateKiller,\n.NET 3.0 and .NET 3.5 did not change the version of the CLR, so \"using ASP.NET 3.5\" is a more complicated thing that it sounds like it should be at first. In essence, you're still running on the 2.0 CLR, but you're using the C# 3.0 compiler and linking against the 3.5 libraries. It means adding a bunch of stuff to your Web.config file to become an ASP.NET 3.5 project.\nScott Hanselman has an awesome blog post covering the details:\nhttp://www.hanselman.com/blog/HowToSetAnIISApplicationOrAppPoolToUseASPNET35RatherThan20.aspx\n",
"The version you are selecting in IIS is the version of the CLR to use. There are only two versions of the CLR. The .NET Framework 3.5 runs on CLR 2.0\n",
"The new framework is .Net 3.5, you'll have a new assembly System.Core, + a few more if you use features like Linq\n.Net 3.5 comes with the new C#3.0 compiler\nASP.Net is still version 2.0\nLovely and confusing isn't it ;-)\nYou should upgrade the .Net framework on the server to .Net 3.5 SP1, but you're still going to be running ASP.Net 2.0\n"
] | [
5,
2,
1,
1,
0,
0
] | [] | [] | [
".net_3.5",
"asp.net"
] | stackoverflow_0000004627_.net_3.5_asp.net.txt |
Q:
A ThreadStateException occurs when trying to restart a thread
From time to time I get a System.Threading.ThreadStateException when attempting to restart a thread. The code in question is as follows:
// Make sure the thread is done stopping
while (this.mThread.ThreadState == ThreadState.Running)
{
Thread.Sleep(0);
}
// Respawn a thread if the current one is stopped or doesn't exist
if (this.mThread == null || this.mThread.ThreadState == ThreadState.Stopped)
{
this.mThread = new Thread(new ParameterizedThreadStart(Monitor));
}
// Start the thread
if (check)
{
this.mThread.Start(60000);
}
else
{
this.mThread.Start(0);
}
So two questions - is this the correct way of doing things, and if it is, is there a way to prevent the error from occurring?
A:
It's possible for a thread to be in more than one state at once; therefore, the ThreadState property is actually a bitmap of possible states. So testing for equality with just one state will not give you the right result. You would need to do something like:
if((mThread.ThreadState & ThreadState.Running) != 0)
However, checking thread state is the wrong way to do this. I'm not entirely clear what you're trying to achieve, but I will guess that you're waiting for a thread to terminate before restarting it. In that case you should do:
mThread.Join();
mThread = new Thread(new ParameterizedThreadStart(Monitor));
if(check)
mThread.Start(60000);
else
mThread.Start(0);
Although if you describe the problem you're trying to solve in more detail I'm almost certain there will be a better solution. Waiting around for a thread to end just to restart it again doesn't seem that efficient to me. Perhaps you just need some kind of inter-thread communication?
John.
A:
The problem is that you have code that first checks if it should create a new thread object, and another piece of code that determines whether to start the thread object. Due to race conditions and similar things, your code might end up trying to call .Start on an existing thread object. Considering you don't post the details behind the check variable, it's impossible to know what might trigger this behavior.
You should reorganize your code so that .Start is guaranteed to only be called on new objects. In short, you should put the Start method into the same if-statement as the one that creates a new thread object.
Personally, I would try to reorganize the entire code so that I didn't need to create another thread, but wrap the code inside the thread object inside a loop so that the thread just keeps on going.
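A hedged sketch of that last suggestion (all names here are mine): keep one long-lived worker thread and signal it, instead of restarting threads:
private readonly AutoResetEvent mWake = new AutoResetEvent(false);
private volatile bool mStopRequested;

private void Monitor()
{
    while (!mStopRequested)
    {
        mWake.WaitOne();          // block until someone requests a pass
        if (mStopRequested) break;
        // ... do the monitoring work here ...
    }
}

// Elsewhere: call mWake.Set() to trigger a pass;
// set mStopRequested = true and call mWake.Set() again to shut down cleanly.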
A:
A ThreadStateException is thrown because you're trying to start a thread that's not in a startable state. The most likely situations would be that it's already running, or that it has fully exited.
There are potentially a couple things that might be happening. First is, the thread might have transitioned from Running to StopRequested, which isn't fully stopped yet, so your logic doesn't create a new thread, and you're trying to start a thread which has just finished running or is about to finish running (neither of which is a valid state for restarting).
The other possibility is that the thread was aborted. Threads which are aborted go to the Aborted state, not the Stopped state, and of course are also not valid for restarting.
Really, the only kind of thread that is still alive that can be "restarted" is one that's suspended. You might want to use this conditional instead:
if (this.mThread == null || this.mThread.ThreadState != ThreadState.Suspended)
| A ThreadStateException occurs when trying to restart a thread | From time to time I get a System.Threading.ThreadStateException when attempting to restart a thread. The code in question is as follows:
// Make sure the thread is done stopping
while (this.mThread.ThreadState == ThreadState.Running)
{
Thread.Sleep(0);
}
// Respawn a thread if the current one is stopped or doesn't exist
if (this.mThread == null || this.mThread.ThreadState == ThreadState.Stopped)
{
this.mThread = new Thread(new ParameterizedThreadStart(Monitor));
}
// Start the thread
if (check)
{
this.mThread.Start(60000);
}
else
{
this.mThread.Start(0);
}
So two questions - is this the correct way of doing things, and if it is, is there a way to prevent the error from occurring?
| [
"It's possible for a thread to be in more than one state at once therefore the ThreadState property is actually a bitmap of possible states. So testing for equality with just one state will not give you the right result. You would need to do something like:\nif((mThread.ThreadState & ThreadState.Running) != 0)\n\nHowever, checking thread state is the wrong to do anything. I'm not entirely clear what you're trying to achieve but I will guess that you're waiting for a thread to terminate before restarting it. In that case you should do:\nmThread.Join();\nmThread = new Thread(new ParameterizedThreadStart(Monitor));\nif(check)\n mThread.Start(60000);\nelse\n mThread.Start(0);\n\nAlthough if you describe the problem you're trying to solve in more detail I'm almost certain there will be a better solution. Waiting around for a thread to end just to restart it again doesn't seem that efficient to me. Perhaps you just need some kind of inter-thread communication?\nJohn.\n",
"The problem is that you have code that first checks if it should create a new thread object, and another piece of code that determines wether to start the thread object. Due to race conditions and similar things, your code might end up trying to call .Start on an existing thread object. Considering you don't post the details behind the check variable, it's impossible to know what might trigger this behavior.\nYou should reorganize your code so that .Start is guaranteed to only be called on new objects. In short, you should put the Start method into the same if-statement as the one that creates a new thread object.\nPersonally, I would try to reorganize the entire code so that I didn't need to create another thread, but wrap the code inside the thread object inside a loop so that the thread just keeps on going.\n",
"A ThreadStateException is thrown because you're trying to start a thread that's not in a startable state. The most likely situations would be that it's already running, or that it has fully exited.\nThere are potentially a couple things that might be happening. First is, the thread might have transitioned from Running to StopRequested, which isn't fully stopped yet, so your logic doesn't create a new thread, and you're trying to start a thread which has just finished running or is about to finish running (neither of which is a valid state for restarting).\nThe other possibility is that the thread was aborted. Threads which are aborted go to the Aborted state, not the Stopped state, and of course are also not valid for restarting.\nReally, the only kind of thread that is still alive that can be \"restarted\" is one that's suspended. You might want to use this conditional instead:\nif (this.mThread == null || this.mThread.ThreadState != ThreadState.Suspended)\n"
] | [
6,
3,
1
] | [] | [] | [
".net",
"c#",
"exception",
"multithreading"
] | stackoverflow_0000013170_.net_c#_exception_multithreading.txt |
Q:
How to trace COM objects exceptions?
I have a DLL with some COM objects. Sometimes, these objects crash and register an error event in the Windows Event Log with lots of hexadecimal information. I have no clue why these crashes happen.
So, how can I trace those COM objects' exceptions?
A:
The first step is to lookup the Fail code's hex value (E.G. E_FAIL 0x80004005). I've had really good luck with posting that value in Google to get a sense of what the error code means.
Then, I just use trial and error to try to isolate the location in code that's failing, and the root cause of the failure.
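If the objects are consumed from .NET, a quick hedged addition of mine: the runtime can translate an HRESULT into a readable exception for you:
using System.Runtime.InteropServices;

Exception ex = Marshal.GetExceptionForHR(unchecked((int)0x80004005)); // E_FAIL
Console.WriteLine(ex.Message);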
A:
If you just want a really quick way to find out what the error code means, you could use the "Error Lookup" tool packaged with Visual Studio (details here). Enter the hex value, and it will give you the string describing that error code.
Of course, once you know that, you've still got to figure out why it's happening.
A:
A good way to look up error (hresult) codes is HResult Plus or welt.exe (Windows Error Lookup Tool).
I use logging internally in the COM-classes to see what is going on. Also, once the COM-class is loaded by the executable, you can attach the VS debugger to it and debug the COM code with breakpoints, watches, and all that fun stuff.
A:
COM objects don't throw exceptions. They return HRESULTs, most of which indicate a failure. So if you're looking for the equivalent of an exception stack trace, you're out of luck. You're going to have to walk through the code by hand and figure out what's going on.
| How to trace COM objects exceptions? | I have a DLL with some COM objects. Sometimes, these objects crash and register an error event in the Windows Event Log with lots of hexadecimal information. I have no clue why these crashes happen.
So, how can I trace those COM objects' exceptions?
| [
"The first step is to lookup the Fail code's hex value (E.G. E_FAIL 0x80004005). I've had really good luck with posting that value in Google to get a sense of what the error code means. \nThen, I just use trial and error to try to isolate the location in code that's failing, and the root cause of the failure. \n",
"If you just want a really quick way to find out what the error code means, you could use the \"Error Lookup\" tool packaged with Visual Studio (details here). Enter the hex value, and it will give you the string describing that error code.\nOf course, once you know that, you've still got to figure out why it's happening.\n",
"A good way to look up error (hresult) codes is HResult Plus or welt.exe (Windows Error Lookup Tool).\nI use logging internally in the COM-classes to see what is going on. Also, once the COM-class is loaded by the executable, you can attach the VS debugger to it and debug the COM code with breakpoints, watches, and all that fun stuff.\n",
"COM objects don't throw exceptions. They return HRESULTs, most of which indicate a failure. So if you're looking for the equivalent of an exception stack trace, you're out of luck. You're going to have to walk through the code by hand and figure out what's going on.\n"
] | [
2,
1,
1,
1
] | [] | [] | [
"com",
"dll",
"exception",
"trace"
] | stackoverflow_0000010935_com_dll_exception_trace.txt |
Q:
How do I integrate my continuous integration system with my bug tracking system?
I use cruisecontrol.rb for CI and FogBugz for bug tracking, but the more general the answers, the better.
First is the technical problem: is there an API for FogBugz? Are there good tutorials, or better yet, pre-written code?
Second is the procedural problem: what, exactly, should the CI put in the bug tracker when the build breaks? Perhaps:
Title: "#{last committer} broke the build!"
Body: "#{ error traces }"
I suppose this presupposes the answer to this question: should I even put CI breaks into my bug tracking?
A:
At my company we've recently adopted the (commercial) Atlassian stack - including JIRA for issue tracking and Bamboo for builds. Much like the Microsoft world (I'm guessing - we're a Java shop), if you get all your products from a single vendor you get the bonus of tight integration.
For an example of how they've done interoperability, view their interoperability page.
Enough shilling. Generally speaking, I can summarize their general approach as:
Create issues in your bug tracker (ex: issue key of PROJ-123).
When you commit code, add "PROJ-123" to your commit comment to indicate what bug this code change fixes.
When your CI server checks out the code, scan the commit comments of the diffs. Record any strings matching the regex of your issue keys.
When the build completes, generate a report of what issue keys were found.
Specifically to your second problem:
Your CI doesn't have to put anything into your bug tracker. Bamboo doesn't put anything into JIRA. Instead, the Atlassian folks have provided a plugin to JIRA that will make a remote api call into Bamboo, asking the question "Bamboo, to what builds am I (a JIRA issue) related?". This is probably best explained with a screenshot.
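As a minimal sketch of step 3 in C# (the key pattern and the RecordIssueKeyForBuild helper are assumptions of mine; adapt both to your tracker):
using System.Text.RegularExpressions;

MatchCollection keys = Regex.Matches(commitComment, @"\b[A-Z][A-Z0-9]*-\d+\b");
foreach (Match key in keys)
    RecordIssueKeyForBuild(key.Value); // hypothetical helper that stores the association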
A:
All the CI setups I've worked with send an email (to a list), but if you did want—especially if your team uses FogBugz much as a todo system—you could just open a case in FogBugz 6. It has an API that lets you open cases. For that matter, you could just configure it to send the email to your FogBugz' email submission address, but the API might let you do more, like assign the case to the last committer.
Brian's answer suggests to me, if your CI finds a failure in a commit that had a case number, you might even just reopen the existing case. Like codifying a case field for every little thing, though, there's a point where the CI automation could be "too smart," get it wrong, and just be annoying. Opening a new case could be plenty.
And thanks: this makes me wonder if I should try integrating our Chimps setup with our FogBugz!
A:
CC comes with a utility that warns you when builds fail; it probably isn't worth logging the failing build in FogBugz - you don't need to track issues that are immediately resolved (as most broken builds will be).
To go the other way round (FogBugz showing checkins that fixed the issue) you need a web-based repository browser - FogBugz is easy to configure so that it shows the right changes.
| How do I integrate my continuous integration system with my bug tracking system? | I use cruisecontrol.rb for CI and FogBugz for bug tracking, but the more general the answers, the better.
First is the technical problem: is there an API for FogBugz? Are there good tutorials, or better yet, pre-written code?
Second is the procedural problem: what, exactly, should the CI put in the bug tracker when the build breaks? Perhaps:
Title: "#{last committer} broke the build!"
Body: "#{ error traces }"
I suppose this presupposes the answer to this question: should I even put CI breaks into my bug tracking?
| [
"At my company we've recently adopted the (commercial) Atlassian stack - including JIRA for issue tracking and Bamboo for builds. Much like the Microsoft world (I'm guessing - we're a Java shop), if you get all your products from a single vendor you get the bonus of tight integration.\nFor an example of how they've done interoperability, view their interoperability page.\nEnough shilling. Generally speaking, I can summarize their general approach as:\n\nCreate issues in your bug tracker (ex: issue key of PROJ-123).\nWhen you commit code, add \"PROJ-123\" to your commit comment to indicate what bug this code change fixes.\nWhen your CI server checks out the code, scan the commit comments of the diffs. Record any strings matching the regex of your issue keys.\nWhen the build completes, generate a report of what issue keys were found.\n\nSpecifically to your second problem:\nYour CI doesn't doesn't have to put anything into your bug tracker. Bamboo doesn't put anything into JIRA. Instead, the Atlassian folks have provided a plugin to JIRA that will make a remote api call into Bamboo, asking the question \"Bamboo, to what builds am I (a JIRA issue) related?\". This is probably best explained with a screenshot.\n",
"All the CI setups I've worked with send an email (to a list), but if you did want—especially if your team uses FogBugz much as a todo system—you could just open a case in FogBugz 6. It has an API that lets you open cases. For that matter, you could just configure it to send the email to your FogBugz' email submission address, but the API might let you do more, like assign the case to the last committer.\nBrian's answer suggests to me, if your CI finds a failure in a commit that had a case number, you might even just reopen the existing case. Like codifying a case field for every little thing, though, there's a point where the CI automation could be \"too smart,\" get it wrong, and just be annoying. Opening a new case could be plenty.\nAnd thanks: this makes me wonder if I should try integrating our Chimps setup with our FogBugz! \n",
"CC comes with a utility that warns you when builds fail, it probably isn't worth logging the failing build in FogBugz - you don't need to track issues that are immediately resolved (as most broken builds will be)\nTo go the other way round (FogBugz showing checkins that fixed the issue) you need a web based repository browser - FogBugz is easy to configure so that it shows the right changes.\n"
] | [
4,
3,
0
] | [] | [] | [
"bug_tracking",
"continuous_integration",
"cruisecontrol.rb",
"fogbugz"
] | stackoverflow_0000013200_bug_tracking_continuous_integration_cruisecontrol.rb_fogbugz.txt |
Q:
How to render a control to look like ComboBox with Visual Styles enabled?
I have a control that is modelled on a ComboBox. I want to render the control so that the control border looks like that of a standard Windows ComboBox. Specifically, I have followed the MSDN documentation and all the rendering of the control is correct except for rendering when the control is disabled.
Just to be clear, this is for a system with Visual Styles enabled. Also, all parts of the control render properly except the border around a disabled control, which does not match the disabled ComboBox border colour.
I am using the VisualStyleRenderer class. MSDN suggests using the VisualStyleElement.TextBox element for the TextBox part of the ComboBox control but a standard disabled TextBox and a standard disabled ComboBox draw slightly differently (one has a light grey border, the other a light blue border).
How can I get correct rendering of the control in a disabled state?
A:
I'm not 100% sure if this is what you are looking for, but you should check out the VisualStyleRenderer in the System.Windows.Forms.VisualStyles namespace.
VisualStyleRenderer class (MSDN)
How to: Render a Visual Style Element (MSDN)
VisualStyleElement.ComboBox.DropDownButton.Disabled (MSDN)
Since VisualStyleRenderer won't work if the user doesn't have visual styles enabled (he/she might be running 'classic mode' or an operating system prior to Windows XP), you should always have a fallback to the ControlPaint class.
// Create the renderer.
if (VisualStyleInformation.IsSupportedByOS
&& VisualStyleInformation.IsEnabledByUser)
{
renderer = new VisualStyleRenderer(
VisualStyleElement.ComboBox.DropDownButton.Disabled);
}
and then do it like this when drawing:
if(renderer != null)
{
// Use visual style renderer.
}
else
{
// Use ControlPaint renderer.
}
Hope it helps!
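For the ControlPaint branch, a hedged sketch (g and buttonRect are assumed to come from your paint handler): ControlPaint ships a ready-made combo drop-down arrow:
ControlPaint.DrawComboButton(g, buttonRect,
    Enabled ? ButtonState.Normal : ButtonState.Inactive);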
A:
Are any of the ControlPaint methods useful for this? That's what I usually use for custom-rendered controls.
| How to render a control to look like ComboBox with Visual Styles enabled? | I have a control that is modelled on a ComboBox. I want to render the control so that the control border looks like that of a standard Windows ComboBox. Specifically, I have followed the MSDN documentation and all the rendering of the control is correct except for rendering when the control is disabled.
Just to be clear, this is for a system with Visual Styles enabled. Also, all parts of the control render properly except the border around a disabled control, which does not match the disabled ComboBox border colour.
I am using the VisualStyleRenderer class. MSDN suggests using the VisualStyleElement.TextBox element for the TextBox part of the ComboBox control but a standard disabled TextBox and a standard disabled ComboBox draw slightly differently (one has a light grey border, the other a light blue border).
How can I get correct rendering of the control in a disabled state?
| [
"I'm not 100% sure if this is what you are looking for but you should check out the VisualStyleRenderer in the System.Windows.Forms.VisualStyles-namespace.\n\nVisualStyleRenderer class (MSDN)\nHow to: Render a Visual Style Element (MSDN)\nVisualStyleElement.ComboBox.DropDownButton.Disabled (MSDN)\n\nSince VisualStyleRenderer won't work if the user don't have visual styles enabled (he/she might be running 'classic mode' or an operative system prior to Windows XP) you should always have a fallback to the ControlPaint class.\n// Create the renderer.\nif (VisualStyleInformation.IsSupportedByOS \n && VisualStyleInformation.IsEnabledByUser) \n{\n renderer = new VisualStyleRenderer(\n VisualStyleElement.ComboBox.DropDownButton.Disabled);\n}\n\nand then do like this when drawing:\nif(renderer != null)\n{\n // Use visual style renderer.\n}\nelse\n{\n // Use ControlPaint renderer.\n}\n\nHope it helps!\n",
"Are any of the ControlPaint methods useful for this? That's what I usually use for custom-rendered controls.\n"
] | [
11,
1
] | [] | [] | [
".net",
"c#",
"winforms"
] | stackoverflow_0000002874_.net_c#_winforms.txt |
Q:
How can I refactor HTML markup out of my property files?
I've recently inherited an internationalized and text-heavy Struts 1.1 web application. Many of the JSP files look like:
<p>
<bean:message key="alert" />
</p>
and the properties files look like:
messages.properties
alert=Please update your <a href="/address.do">address</a> and <a href="/contact.do">contact information</a>.
with the appropriate translations in N other languages (messages_fr.properties, etc).
Problems:
DRY violation - I have N references to my Struts action URLs instead of 1, which makes refactoring action URLs error-prone.
Mixed concerns - My application's markup is now in more than just my JSP files, making it difficult for a web specialist to tweak the markup (using CSS, etc).
Post-translation markup - Anytime I receive newly-translated text, I must decide what to surround with the <a>...</a> markup. Easy for English but less so for unfamiliar languages.
I've considered adding placeholders in the messages file, like:
alert=Please update your {0} and {1}.
but then the words "address" and "contact information" would somehow need to be localized, wrapped with markup, and passed to my message tag - and I can't see an easy way to do it.
What can I do to improve this?
A:
Avoid creating links within long
blocks of text. Prefer shorter text
that can act as a logically complete
and independent link.
Generally, it will lead to fewer problems. Sometimes you have to compromise your UI design to accommodate localization; sometimes you need to compromise your localization process to accommodate the UI.
Any time a developer manually manipulates post-translation strings is a source of potentially expensive bugs. Cutting/pasting or string editing can result in character corruption, misplaced strings, etc. A translation defect needs the participation of outside parties to fix which involves cost and takes time.
Thinking on it, something like this might be less ugly:
<p>Please update your address and contact information.
<br />
<a href="/address.do">update address</a>
<br />
<a href="/contact.do">update contact information</a></p>
...but I'm no UI designer.
A:
One approach that comes to mind is that you could store the translated replacement parameters i.e. "address" and "contact information" in a separate properties file, one per locale. Then have your Action class (or probably some helper class) look up the values from the correct ResourceBundle for the current locale and pass them to the message tag.
A:
Perhaps:
#
alert=Please update your {0}address{1} and {2}contact information{3}.
A:
The message message tag API allows
only 5 parametric arguments
Ah! I blame my complete ignorance of the Struts API.
To quote the manual:
Some of the features in this taglib
are also available in the JavaServer
Pages Standard Tag Library (JSTL). The
Struts team encourages the use of the
standard tags over the Struts specific
tags when possible.
You could probably do this with the http://java.sun.com/jsp/jstl/fmt taglib.
<fmt:bundle basename="messages">
<fmt:message key="alert">
<fmt:param value='<a href="/">' />
<fmt:param value="</a>" />
<fmt:param value='<a href="/">' />
<fmt:param value="</a>" />
</fmt:message>
</fmt:bundle>
The downside is that this isn't valid XML and yanking the values to variables involves more indirection, lookups and verbosity. This is not a good solution.
I don't know Struts, but if it is anything like JavaServer Faces (same architect), then there is probably support for configuring a replacement control. I would either replace the existing control with a more flexible one or add a new one.
Anytime I receive newly-translated
text, I must decide what to surround
with the <a>...</a> markup.
There is no way you should be doing this and I see this as a fault in your translation process (I am an ex-localization engineer and ex-developer of localization tools). The {0} characters should be included in the files that are sent to the translators. The localization guidelines should explain the string's context and the meaning of any variables.
You can programmatically validate the property bundles on return. String-specific regex's might do the trick. It isn't outside the realms of possibility that "address" and "contact information" would swap order during translation.
The simplest solution is to redesign the messages to render:
<a href="/address.do">Please update your address.</a>
<a href="/contact.do">Please update your contact information.</a>
I accept that this might not be a solution in all cases and may have your UI designer spitting teeth.
| How can I refactor HTML markup out of my property files? | I've recently inherited an internationalized and text-heavy Struts 1.1 web application. Many of the JSP files look like:
<p>
<bean:message key="alert" />
</p>
and the properties files look like:
messages.properties
alert=Please update your <a href="/address.do">address</a> and <a href="/contact.do">contact information</a>.
with the appropriate translations in N other languages (messages_fr.properties, etc).
Problems:
DRY violation - I have N references to my Struts action URLs instead of 1, which makes refactoring action URLs error-prone.
Mixed concerns - My application's markup is now in more than just my JSP files, making it difficult for a web specialist to tweak the markup (using CSS, etc).
Post-translation markup - Anytime I receive newly-translated text, I must decide what to surround with the <a>...</a> markup. Easy for English but less so for unfamiliar languages.
I've considered adding placeholders in the messages file, like:
alert=Please update your {0} and {1}.
but then the words "address" and "contact information" would somehow need to be localized, wrapped with markup, and passed to my message tag - and I can't see an easy way to do it.
What can I do to improve this?
| [
"\nAvoid creating links within long\n blocks of text. Prefer shorter text\n that can act as a logically complete\n and independent link.\n\nGenerally, it will lead to fewer problems. Sometimes you have to compromise your UI design to accommodate localization; sometimes you need to compromise your localization process to accommodate the UI.\nAny time a developer manually manipulates post-translation strings is a source of potentially expensive bugs. Cutting/pasting or string editing can result in character corruption, misplaced strings, etc. A translation defect needs the participation of outside parties to fix which involves cost and takes time.\nThinking on it, something like this might be less ugly:\n<p>Please update your address and contact information.\n<br />\n<a href=\"/address.do\">update address</a>\n<br />\n<a href=\"/contact.do\">update contact information</a></p>\n\n...but I'm no UI designer.\n",
"One approach that comes to mind is that you could store the translated replacement parameters i.e. \"address\" and \"contact information\" in a separate properties file, one per locale. Then have your Action class (or probably some helper class) look up the values from the correct ResourceBundle for the current locale and pass them to the message tag.\n",
"Perhaps:\n#\nalert=Please update your {0}address{1} and {2}contact information{3}.\n\n",
"\nThe message message tag API allows\n only 5 parametric arguments\n\nAh! I blame my complete ignorance of the Struts API.\nTo quote the manual: \n\nSome of the features in this taglib\n are also available in the JavaServer\n Pages Standard Tag Library (JSTL). The\n Struts team encourages the use of the\n standard tags over the Struts specific\n tags when possible.\n\nYou could probably do this with the http://java.sun.com/jsp/jstl/fmt taglib.\n<fmt:bundle basename=\"messages\">\n <fmt:message key=\"alert\">\n <fmt:param value='<a href=\"/\">' />\n <fmt:param value=\"</a>\" />\n <fmt:param value='<a href=\"/\">' />\n <fmt:param value=\"</a>\" />\n </fmt:message>\n</fmt:bundle>\n\nThe downside is that this isn't valid XML and yanking the values to variables involves more indirection, lookups and verbosity. This is not a good solution.\nI don't know Struts, but if it is anything like JavaServer Faces (same architect), then there is probably support for configuring a replacement control. I would either replace the existing control with a more flexible one or add a new one.\n\nAnytime I receive newly-translated\n text, I must decide what to surround\n with the <a>...</a> markup.\n\nThere is no way you should be doing this and I see this as a fault in your translation process (I am an ex-localization engineer and ex-developer of localization tools). The {0} characters should be included in the files that are sent to the translators. The localization guidelines should explain the string's context and the meaning of any variables.\nYou can programmatically validate the property bundles on return. String-specific regex's might do the trick. It isn't outside the realms of possibility that \"address\" and \"contact information\" would swap order during translation.\nThe simplest solution is to redesign the messages to render:\n<a href=\"/address.do\">Please update your address.</a>\n<a href=\"/contact.do\">Please update your contact information.</a>\n\nI accept that this might not be a solution in all cases and may have your UI designer spitting teeth.\n"
] | [
2,
1,
0,
0
] | [] | [] | [
"internationalization",
"java",
"jsp",
"struts"
] | stackoverflow_0000013225_internationalization_java_jsp_struts.txt |
Q:
Is there any wiki engine that supports page creation by email?
I want to consolidate all the loose information of the company I work for into a knowledge base. A wiki seems to be the way to go, but most of the relevant information is buried inside PST files, and it would take ages to convince people to manually translate their emails one by one (including attachments) into wiki pages. So I'm looking for a wiki engine that supports page creation by email, that is, capable of receiving email (supporting plain text, html and attachments) and then creating the corresponding page. Supporting file indexing and looking for duplicates would be a huge bonus.
I tried with WikiMatrix, but didn't find what I was looking for. I wouldn’t mind building my own engine (borrowing a couple of snippets here and there for MIME decoding), but I don’t think this problem is so rare that no implementation exists.
A:
Both Jotspot and MediaWiki allow you to do this. The latter has support for a lot of plugins, of which this is one. The format is essentially PageTitle@something. Jotspot is a hosted solution where you get your own email address, MediaWiki is self-hosted and you give it a mailbox to monitor for incoming mail.
Articles are appended to pages if they already exist, or a new page is created if it does not. This does require a degree of discipline for naming conventions, but is great for CC'ing.
We use MediaWiki here and I like it a lot. It has the same flaws as many other Wiki packages (e.g. it is difficult to reorganize without orphaning pages) but is as good if not better than other Wiki packages I've used.
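For a sense of what the self-hosted route involves, here is a rough Python sketch of a mailbox-polling importer. It assumes the third-party mwclient library (and its page.text()/page.save() calls), placeholder hostnames and credentials throughout, and plain-text mail only; a real mail plugin would also handle attachments and HTML mail.
# Sketch only: poll an IMAP inbox and turn each unseen mail into a wiki page.
import imaplib, email
import mwclient  # third-party: pip install mwclient

site = mwclient.Site('wiki.example.com')
site.login('importbot', 'secret')

imap = imaplib.IMAP4_SSL('mail.example.com')
imap.login('wiki-inbox@example.com', 'secret')
imap.select('INBOX')

_, ids = imap.search(None, 'UNSEEN')
for num in ids[0].split():
    _, raw = imap.fetch(num, '(RFC822)')
    msg = email.message_from_bytes(raw[0][1])
    title = msg['Subject'] or 'Untitled'
    body = msg.get_payload(decode=True).decode('utf-8', 'replace')  # plain-text mail only
    page = site.pages[title]
    old = page.text() if page.exists else ''   # append if the page already exists
    page.save(old + '\n\n' + body, summary='Imported from email')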
A:
I don't know if this is exactly what you're looking for, but I know many of 37 Signals' products support adding data through email. I use Highrise to keep track of some of my business correspondence, and I'm able to CC or forward emails to Highrise and they get added to the appropriate contact.
| Is there any wiki engine that supports page creation by email? | I want to consolidate all the loose information of the company I work for into a knowledge base. A wiki seems to be the way to go, but most of the relevant information is buried inside PST files, and it would take ages to convince people to manually translate their emails one by one (including attachments) into wiki pages. So I'm looking for a wiki engine that supports page creation by email, that is, capable of receiving email (supporting plain text, html and attachments) and then creating the corresponding page. Supporting file indexing and looking for duplicates would be a huge bonus.
I tried with WikiMatrix, but didn't find what I was looking for. I wouldn’t mind building my own engine (borrowing a couple of snippets here and there for MIME decoding), but I don’t think this problem is so rare that no implementation exists.
| [
"Both Jotspot and MediaWiki allow you to do this. The latter has support for a lot of plugins, of which this is one. The format is essentially PageTitle@something. Jotspot is a hosted solution where you get your own email address, MediaWiki is self-hosted and you give it a mailbox to monitor for incoming.\nArticles are appended to pages if they already exist, or a new page is created if it does not. This does require a degree of discipline for naming conventions, but is great for CC'ing.\nWe use MediaWiki here and I like it a lot. It has the same flaws as many other Wiki packages (e.g difficult to reorganize without orphaning pages) but is as good if not better than other Wiki packages I've used.\n",
"I don't know if this is exactly what you're looking for, but I know many of 37 Signals' products support adding data through email. I use Highrise to keep track of some of my business correspondence, and I'm able to CC or forward emails to Highrise and they get added to the appropriate contact.\n"
] | [
3,
0
] | [] | [] | [
"email",
"mime",
"wiki",
"wiki_engine"
] | stackoverflow_0000011612_email_mime_wiki_wiki_engine.txt |
Q:
What are the advantages of using a single database for EACH client?
In a database-centric application that is designed for multiple clients, I've always thought it was "better" to use a single database for ALL clients - associating records with proper indexes and keys. In listening to the Stack Overflow podcast, I heard Joel mention that FogBugz uses one database per client (so if there were 1000 clients, there would be 1000 databases). What are the advantages of using this architecture?
I understand that for some projects, clients need direct access to all of their data - in such an application, it's obvious that each client needs their own database. However, for projects where a client does not need to access the database directly, are there any advantages to using one database per client? It seems that in terms of flexibility, it's much simpler to use a single database with a single copy of the tables. It's easier to add new features, it's easier to create reports, and it's just easier to manage.
I was pretty confident in the "one database for all clients" method until I heard Joel (an experienced developer) mention that his software uses a different approach -- and I'm a little confused with his decision...
I've heard people cite that databases slow down with a large number of records, but any relational database with some merit isn't going to have that problem - especially if proper indexes and keys are used.
Any input is greatly appreciated!
A:
Assume there's no scaling penalty for storing all the clients in one database; for most people, and well configured databases/queries, this will be fairly true these days. If you're not one of these people, well, then the benefit of a single database is obvious.
In this situation, benefits come from the encapsulation of each client. From the code perspective, each client exists in isolation - there is no possible situation in which a database update might overwrite, corrupt, retrieve or alter data belonging to another client. This also simplifies the model, as you don't need to ever consider the fact that records might belong to another client.
You also get benefits of separability - it's trivial to pull out the data associated with a given client, and move it to a different server. Or restore a backup of that client when they call up to say "We've deleted some key data!", using the builtin database mechanisms.
You get easy and free server mobility - if you outscale one database server, you can just host new clients on another server. If they were all in one database, you'd need to either get beefier hardware, or run the database over multiple machines.
You get easy versioning - if one client wants to stay on software version 1.0, and another wants 2.0, where 1.0 and 2.0 use different database schemas, there's no problem - you can migrate one without having to pull them out of one database.
I can think of a few dozen more, I guess. But all in all, the key concept is "simplicity". The product manages one client, and thus one database. There is never any complexity from the "But the database also contains other clients" issue. It fits the mental model of the user, where they exist alone. Advantages like being able to do easy reporting on all clients at once are minimal - how often do you want a report on the whole world, rather than just one client?
A:
Here's one approach that I've seen before:
Each customer has a unique connection string stored in a master customer database.
The database is designed so that everything is segmented by CustomerID, even if there is a single customer on a database.
Scripts are created to migrate all customer data to a new database if needed, and then only that customer's connection string needs to be updated to point to the new location.
This allows for using a single database at first, and then easily segmenting later on once you've got a large number of clients, or more commonly when you have a couple of customers that overuse the system.
I've found that restoring specific customer data is really tough when all the data is in the same database, but managing upgrades is much simpler.
When using a single database per customer, you run into a huge problem of keeping all customers running at the same schema version, and that doesn't even consider backup jobs on a whole bunch of customer-specific databases. Naturally restoring data is easier, but if you make sure not to permanently delete records (just mark with a deleted flag or move to an archive table), then you have less need for database restore in the first place.
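A minimal C# sketch of the master-database lookup described above might look like this (the table, column and connection-string names are assumptions):
using System.Data.SqlClient;

public static class CustomerDb
{
    // Resolve a customer's private database from the master customer database.
    public static string GetConnectionString(string masterConnectionString, int customerId)
    {
        using (var conn = new SqlConnection(masterConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT ConnectionString FROM Customers WHERE CustomerID = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", customerId);
            conn.Open();
            return (string)cmd.ExecuteScalar();
        }
    }
}
Every data call then opens its connection through this lookup, so moving a customer to another server is just a row update in the master database.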
A:
To keep it simple. You can be sure that your client is only seeing their data. The client with fewer records doesn't have to pay the penalty of having to compete with hundreds of thousands of records that may be in the database but not theirs. I don't care how well everything is indexed and optimized; there will be queries that have to scan every record.
A:
Well, what if one of your clients tells you to restore to an earlier version of their data due to some botched import job or similar? Imagine how your clients would feel if you told them "you can't do that, since your data is shared between all our clients" or "Sorry, but your changes were lost because client X demanded a restore of the database".
A:
As for the pain of upgrading 1000 database servers at once, some fairly simple automation should take care of that. As long as each database maintains an identical schema, then it won't really be an issue. We also use the database per client approach, and it works well for us.
Here is an article on this exact topic (yes, it is MSDN, but it is a technology independent article): http://msdn.microsoft.com/en-us/library/aa479086.aspx.
Another discussion of multi-tenancy as it relates to your data model here: http://www.ayende.com/Blog/archive/2008/08/07/Multi-Tenancy--The-Physical-Data-Model.aspx
A:
Scalability. Security. Our company uses the one-DB-per-customer approach as well. It also makes the code a bit easier to maintain.
A:
In regulated industries such as health care, one database per customer may be a requirement, possibly even a separate database server.
The simple answer to updating multiple databases when you upgrade is to do the upgrade as a transaction, and take a snapshot before upgrading if necessary. If you are running your operations well then you should be able to apply the upgrade to any number of databases.
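As a concrete sketch of the "upgrade as a transaction" idea in T-SQL -- the customer list table and the upgrade procedure are hypothetical:
-- Apply one upgrade script to every customer database listed in a master table,
-- each inside its own transaction.
DECLARE @db sysname;
DECLARE dbs CURSOR FOR SELECT DatabaseName FROM MasterCustomerList;
OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC ('USE [' + @db + ']; BEGIN TRAN; EXEC dbo.ApplyUpgrade; COMMIT;');
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;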
Clustering is not really a solution to the problem of indices and full table scans. If you move to a cluster, very little changes. If you have many smaller databases to distribute over multiple machines you can do this more cheaply without a cluster. Reliability and availability are considerations but can be dealt with in other ways (some people will still need a cluster but the majority probably don't).
I'd be interested in hearing a little more context from you on this because clustering is not a simple topic and is expensive to implement in the RDBMS world. There is a lot of talk/bravado about clustering in the non-relational world (Google Bigtable, etc.) but they are solving a different set of problems, and lose some of the useful features from an RDBMS.
A:
There are a couple of meanings of "database"
the hardware box
the running software (e.g. "the oracle")
the particular set of data files
the particular login or schema
It's likely Joel means one of the lower layers. In this case, it's just a matter of software configuration management... you don't have to patch 1000 software servers to fix a security bug, for example.
I think it's a good idea, so that a software bug doesn't leak information across clients. Imagine the case with an errant where clause that showed me your customer data as well as my own.
| What are the advantages of using a single database for EACH client? | In a database-centric application that is designed for multiple clients, I've always thought it was "better" to use a single database for ALL clients - associating records with proper indexes and keys. In listening to the Stack Overflow podcast, I heard Joel mention that FogBugz uses one database per client (so if there were 1000 clients, there would be 1000 databases). What are the advantages of using this architecture?
I understand that for some projects, clients need direct access to all of their data - in such an application, it's obvious that each client needs their own database. However, for projects where a client does not need to access the database directly, are there any advantages to using one database per client? It seems that in terms of flexibility, it's much simpler to use a single database with a single copy of the tables. It's easier to add new features, it's easier to create reports, and it's just easier to manage.
I was pretty confident in the "one database for all clients" method until I heard Joel (an experienced developer) mention that his software uses a different approach -- and I'm a little confused with his decision...
I've heard people cite that databases slow down with a large number of records, but any relational database with some merit isn't going to have that problem - especially if proper indexes and keys are used.
Any input is greatly appreciated!
| [
"Assume there's no scaling penalty for storing all the clients in one database; for most people, and well configured databases/queries, this will be fairly true these days. If you're not one of these people, well, then the benefit of a single database is obvious.\nIn this situation, benefits come from the encapsulation of each client. From the code perspective, each client exists in isolation - there is no possible situation in which a database update might overwrite, corrupt, retrieve or alter data belonging to another client. This also simplifies the model, as you don't need to ever consider the fact that records might belong to another client.\nYou also get benefits of separability - it's trivial to pull out the data associated with a given client ,and move them to a different server. Or restore a backup of that client when the call up to say \"We've deleted some key data!\", using the builtin database mechanisms.\nYou get easy and free server mobility - if you outscale one database server, you can just host new clients on another server. If they were all in one database, you'd need to either get beefier hardware, or run the database over multiple machines.\nYou get easy versioning - if one client wants to stay on software version 1.0, and another wants 2.0, where 1.0 and 2.0 use different database schemas, there's no problem - you can migrate one without having to pull them out of one database.\nI can think of a few dozen more, I guess. But all in all, the key concept is \"simplicity\". The product manages one client, and thus one database. There is never any complexity from the \"But the database also contains other clients\" issue. It fits the mental model of the user, where they exist alone. Advantages like being able to doing easy reporting on all clients at once, are minimal - how often do you want a report on the whole world, rather than just one client?\n",
"Here's one approach that I've seen before:\n\nEach customer has a unique connection string stored in a master customer database.\nThe database is designed so that everything is segmented by CustomerID, even if there is a single customer on a database.\nScripts are created to migrate all customer data to a new database if needed, and then only that customer's connection string needs to be updated to point to the new location.\n\nThis allows for using a single database at first, and then easily segmenting later on once you've got a large number of clients, or more commonly when you have a couple of customers that overuse the system.\nI've found that restoring specific customer data is really tough when all the data is in the same database, but managing upgrades is much simpler.\nWhen using a single database per customer, you run into a huge problem of keeping all customers running at the same schema version, and that doesn't even consider backup jobs on a whole bunch of customer-specific databases. Naturally restoring data is easier, but if you make sure not to permanently delete records (just mark with a deleted flag or move to an archive table), then you have less need for database restore in the first place.\n",
"To keep it simple. You can be sure that your client is only seeing their data. The client with fewer records doesn't have to pay the penalty of having to compete with hundreds of thousands of records that may be in the database but not theirs. I don't care how well everything is indexed and optimized there will be queries that determine that they have to scan every record.\n",
"Well, what if one of your clients tells you to restore to an earlier version of their data due to some botched import job or similar? Imagine how your clients would feel if you told them \"you can't do that, since your data is shared between all our clients\" or \"Sorry, but your changes were lost because client X demanded a restore of the database\".\n",
"As for the pain of upgrading 1000 database servers at once, some fairly simple automation should take care of that. As long as each database maintains an identical schema, then it won't really be an issue. We also use the database per client approach, and it works well for us.\nHere is an article on this exact topic (yes, it is MSDN, but it is a technology independent article): http://msdn.microsoft.com/en-us/library/aa479086.aspx.\nAnother discussion of multi-tenancy as it relates to your data model here: http://www.ayende.com/Blog/archive/2008/08/07/Multi-Tenancy--The-Physical-Data-Model.aspx\n",
"Scalability. Security. Our company uses 1 DB per customer approach as well. It also makes code a bit easier to maintain as well.\n",
"In regulated industries such as health care it may be a requirement of one database per customer, possibly even a separate database server.\nThe simple answer to updating multiple databases when you upgrade is to do the upgrade as a transaction, and take a snapshot before upgrading if necessary. If you are running your operations well then you should be able to apply the upgrade to any number of databases.\nClustering is not really a solution to the problem of indices and full table scans. If you move to a cluster, very little changes. If you have have many smaller databases to distribute over multiple machines you can do this more cheaply without a cluster. Reliability and availability are considerations but can be dealt with in other ways (some people will still need a cluster but majority probably don't).\nI'd be interested in hearing a little more context from you on this because clustering is not a simple topic and is expensive to implement in the RDBMS world. There is a lot of talk/bravado about clustering in the non-relational world Google Bigtable etc. but they are solving a different set of problems, and lose some of the useful features from an RDBMS.\n",
"There are a couple of meanings of \"database\"\n\nthe hardware box\nthe running software (e.g. \"the oracle\")\nthe particular set of data files\nthe particular login or schema\n\nIt's likely Joel means one of the lower layers. In this case, it's just a matter of software configuration management... you don't have to patch 1000 software servers to fix a security bug, for example.\nI think it's a good idea, so that a software bug doesn't leak information across clients. Imagine the case with an errant where clause that showed me your customer data as well as my own.\n"
] | [
51,
16,
12,
10,
10,
6,
3,
0
] | [] | [] | [
"database",
"database_design",
"multi_tenant"
] | stackoverflow_0000013348_database_database_design_multi_tenant.txt |
Q:
Should I support ASP.NET 1.1?
I've just started working on an ASP.NET project which I hope to open source once it gets to a suitable stage. It's basically going to be a library that can be used by existing websites. My preference is to support ASP.NET 2.0 through 3.5, but I wondered how many people I would be leaving out by not supporting ASP.NET 1.1? More specifically, how many people are there still using ASP.NET 1.1 for whom ASP.NET 2.0/3.5 is not an option? If upgrading your server is not an option for you, why not?
A:
Increasingly I think not.
The kind of large rigid organisation currently still clinging to 1.1 (probably because they've only just upgraded to it) is also the kind that's highly unlikely to look at open source solutions.
If I were starting a new ASP.Net project right now I'd stick with .Net 3.5 and probably the new MVC previews.
A:
Remember that .NET 1.1 is going out of general support in October of this year (and that includes ASP.NET 1.1).
A:
I think you would be perfectly fine with targeting just 2.0 and above, someone who would use your library would most likely be doing new development and using at least ASP.NET 2.0. I think it would be a very small group of people doing new development in 1.1.
| Should I support ASP.NET 1.1? | I've just started working on an ASP.NET project which I hope to open source once it gets to a suitable stage. It's basically going to be a library that can be used by existing websites. My preference is to support ASP.NET 2.0 through 3.5, but I wondered how many people I would be leaving out by not supporting ASP.NET 1.1? More specifically, how many people are there still using ASP.NET 1.1 for whom ASP.NET 2.0/3.5 is not an option? If upgrading your server is not an option for you, why not?
| [
"Increasingly I think not.\nThe kind of large rigid organisation currently still clinging to 1.1 (probably because they're only just upgraded to it) is also the kind that's highly unlikely to look at open source solutions.\nIf I were starting a new ASP.Net project right now I'd stick with .Net 3.5 and probably the new MVC previews.\n",
"Remember that .NET 1.1 is going out of general support in October of this year (and that includes ASP.NET 1.1).\n",
"I think you would be perfectly fine with targeting just 2.0 and above, someone who would use your library would most likely be doing new development and using at least ASP.NET 2.0. I think it would be a very small group of people doing new development in 1.1.\n"
] | [
4,
2,
1
] | [] | [] | [
".net_1.1",
"asp.net"
] | stackoverflow_0000013106_.net_1.1_asp.net.txt |
Q:
The theory (and terminology) behind Source Control
I've tried using source control for a couple projects but still don't really understand it. For these projects, we've used TortoiseSVN and have only had one line of revisions. (No trunk, branch, or any of that.) If there are recommended ways to set up source control systems, what are they? What are the reasons and benefits for setting it up that way? What are the underlying differences between the workings of a centralized and distributed source control system?
A:
Think of source control as a giant "Undo" button for your source code. Every time you check in, you're adding a point to which you can roll back. Even if you don't use branching/merging, this feature alone can be very valuable.
Additionally, by having one 'authoritative' version of the source control, it becomes much easier to back up.
Centralized vs. distributed... the difference is really that in distributed, there isn't necessarily one 'authoritative' version of the source control, although in practice people usually still do have the master tree.
The big advantage to distributed source control is two-fold:
When you use distributed source control, you have the whole source tree on your local machine. You can commit, create branches, and work pretty much as though you were all alone, and then when you're ready to push up your changes, you can promote them from your machine to the master copy. If you're working "offline" a lot, this can be a huge benefit.
You don't have to ask anybody's permission to become a distributor of the source control. If person A is running the project, but person B and C want to make changes, and share those changes with each other, it becomes much easier with distributed source control.
A:
I recommend checking out the following from Eric Sink:
http://www.ericsink.com/scm/source_control.html
Having some sort of revision control system in place is probably the most important tool a programmer has for reviewing code changes and understanding who did what to whom. Even for single person projects, it is invaluable to be able to diff current code against a previous known working version to understand what might have gone wrong due to a change.
A:
Here are two articles that are very helpful for understanding the basics. Beyond being informative, Sink's company sells a great source control product called Vault that is free for single users (I am not affiliated in any way with that company).
http://www.ericsink.com/scm/source_control.html
http://betterexplained.com/articles/a-visual-guide-to-version-control/
Vault info at www.vault.com.
A:
Even if you don't branch, you may find it useful to use tags to mark releases.
Imagine that you rolled out a new version of your software yesterday and have started making major changes for the next version. A user calls you to report a serious bug in yesterday's release. You can't just fix it and copy over the changes from your development trunk because the changes you've just made have left the whole thing unstable.
If you had tagged the release, you could check out a working copy of it and use it to fix the bug.
Then, you might choose to create a branch at the tag and check the bug fix into it. That way, you can fix more bugs on that release while you continue to upgrade the trunk. You can also merge those fixes into the trunk so that they'll be present in the next release.
A:
The common standard for setting up Subversion is to have three folders under the root of your repository: trunk, branches and tags. The trunk folder holds your current "main" line of development. For many shops and situations, this is all they ever use... just a single working repository of code.
The tags folder takes it one step further and allows you to "checkpoint" your code at certain points in time. For example, when you release a new build or sometimes even when you simply make a new build, you "tag" a copy into this folder. This just allows you to know exactly what your code looked like at that point in time.
The branches folder holds different kinds of branches that you might need in special situations. Sometimes a branch is a place to work on an experimental feature or features that might take a long time to get stable (therefore you don't want to introduce them into your main line just yet). Other times, a branch might represent the "production" copy of your code which can be edited and deployed independently from your main line of code which contains changes intended for a future release.
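Put together, a conventional repository root looks something like this:
repo/
    trunk/      <- current main line of development
    branches/   <- experimental or release-maintenance lines
    tags/       <- read-only snapshots ("checkpoints") of releases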
Anyway, this is just one aspect of how to set up your system, but I think giving some thought to this structure is important.
| The theory (and terminology) behind Source Control | I've tried using source control for a couple projects but still don't really understand it. For these projects, we've used TortoiseSVN and have only had one line of revisions. (No trunk, branch, or any of that.) If there are recommended ways to set up source control systems, what are they? What are the reasons and benefits for setting it up that way? What are the underlying differences between the workings of a centralized and distributed source control system?
| [
"Think of source control as a giant \"Undo\" button for your source code. Every time you check in, you're adding a point to which you can roll back. Even if you don't use branching/merging, this feature alone can be very valuable.\nAdditionally, by having one 'authoritative' version of the source control, it becomes much easier to back up.\nCentralized vs. distributed... the difference is really that in distributed, there isn't necessarily one 'authoritative' version of the source control, although in practice people usually still do have the master tree.\nThe big advantage to distributed source control is two-fold:\n\nWhen you use distributed source control, you have the whole source tree on your local machine. You can commit, create branches, and work pretty much as though you were all alone, and then when you're ready to push up your changes, you can promote them from your machine to the master copy. If you're working \"offline\" a lot, this can be a huge benefit.\nYou don't have to ask anybody's permission to become a distributor of the source control. If person A is running the project, but person B and C want to make changes, and share those changes with each other, it becomes much easier with distributed source control.\n\n",
"I recommend checking out the following from Eric Sink:\nhttp://www.ericsink.com/scm/source_control.html\nHaving some sort of revision control system in place is probably the most important tool a programmer has for reviewing code changes and understanding who did what to whom. Even for single person projects, it is invaluable to be able to diff current code against previous known working version to understand what might have gone wrong due to a change.\n",
"Here are two articles that are very helpful for understanding the basics. Beyond being informative, Sink's company sells a great source control product called Vault that is free for single users (I am not affiliated in any way with that company). \nhttp://www.ericsink.com/scm/source_control.html\nhttp://betterexplained.com/articles/a-visual-guide-to-version-control/\nVault info at www.vault.com.\n",
"Even if you don't branch, you may find it useful to use tags to mark releases.\nImagine that you rolled out a new version of your software yesterday and have started making major changes for the next version. A user calls you to report a serious bug in yesterday's release. You can't just fix it and copy over the changes from your development trunk because the changes you've just made the whole thing unstable.\nIf you had tagged the release, you could check out a working copy of it and use it to fix the bug.\nThen, you might choose to create a branch at the tag and check the bug fix into it. That way, you can fix more bugs on that release while you continue to upgrade the trunk. You can also merge those fixes into the trunk so that they'll be present in the next release.\n",
"The common standard for setting up Subversion is to have three folders under the root of your repository: trunk, branches and tags. The trunk folder holds your current \"main\" line of development. For many shops and situations, this is all they ever use... just a single working repository of code.\nThe tags folder takes it one step further and allows you to \"checkpoint\" your code at certain points in time. For example, when you release a new build or sometimes even when you simply make a new build, you \"tag\" a copy into this folder. This just allows you to know exactly what your code looked like at that point in time.\nThe branches folder holds different kinds of branches that you might need in special situations. Sometimes a branch is a place to work on experimental feature or features that might take a long time to get stable (therefore you don't want to introduce them into your main line just yet). Other times, a branch might represent the \"production\" copy of your code which can be edited and deployed independently from your main line of code which contains changes intended for a future release.\nAnyway, this is just one aspect of how to set up your system, but I think giving some thought to this structure is important.\n"
] | [
8,
6,
3,
1,
1
] | [] | [] | [
"version_control"
] | stackoverflow_0000013460_version_control.txt |
Q:
Purpose of {1} in this regular expression to match url protocols
I was reading this question about how to parse URLs out of web pages and had a question about the accepted answer which offered this solution:
((mailto\:|(news|(ht|f)tp(s?))\://){1}\S+)
The solution was offered by csmba and he credited it to regexlib.com. Whew. Credits done.
I think this is a fairly naive regular expression but it's a fine starting point for building something better. But, my question is this:
What is the point of {1}? It means "exactly one of the previous grouping", right? Isn't that the default behavior of a grouping in a regular expression? Would the expression be changed in any way if the {1} were removed?
If I saw this from a coworker I would point out his or her error but as I write this the response is rated at a 6 and the expression on regexlib.com is rated a 4 of 5. So maybe I'm missing something?
A:
@Rob: I disagree. To enforce what you are asking for I think you would need to use a negative look-behind, which is possible but is certainly not related to the use of {1}. Neither version of the regexp addresses that particular issue.
To let the code speak:
tibook 0 /home/jj33/swap > cat text
Text this is http://example.com text this is
Text this is http://http://example.com text this is
tibook 0 /home/jj33/swap > cat p
#!/usr/bin/perl
my $re1 = '((mailto\:|(news|(ht|f)tp(s?))\://){1}\S+)';
my $re2 = '((mailto\:|(news|(ht|f)tp(s?))\://)\S+)';
while (<>) {
print "Evaluating: $_";
print "re1 saw \$1 = $1\n" if (/$re1/);
print "re2 saw \$1 = $1\n" if (/$re2/);
}
tibook 0 /home/jj33/swap > cat text | perl p
Evaluating: Text this is http://example.com text this is
re1 saw $1 = http://example.com
re2 saw $1 = http://example.com
Evaluating: Text this is http://http://example.com text this is
re1 saw $1 = http://http://example.com
re2 saw $1 = http://http://example.com
tibook 0 /home/jj33/swap >
So, if there is a difference between the two versions, it doesn't seem to be the one you suggest.
A:
I don't think the {1} has any valid function in that regex.
(mailto:|(news|(ht|f)tp(s?))://){1}
You should read this as: "capture the stuff in the parens exactly one time". But we don't really care about capturing this for use later, e.g. $1 in the replacement. So it's pointless.
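A throwaway Perl check (in the spirit of the earlier test script) confirms the two spellings behave identically:
#!/usr/bin/perl
# {1} is a no-op quantifier: with and without it, the same strings match
my $with = ("http://x" =~ m!(ht|f)tp(s?)://{1}!);
my $without = ("http://x" =~ m!(ht|f)tp(s?)://!);
print $with == $without ? "equivalent\n" : "different\n";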
A:
I don't think it has any purpose. But because RegEx is almost impossible to understand/decompose, people rarely point out errors. That is probably why no one else pointed it out.
A:
@Jeff Atwood, your interpretation is a little off - the {1} means match exactly once, but has no effect on the "capturing" - the capturing occurs because of the parens - the braces only specify the number of times the pattern must match the source - once, as you say.
I agree with @Marius, even if his answer is a little terse and may come off as being flippant. Regular expressions are tough if one's not used to using them, and the {1} in the question isn't quite an error - in systems that support it, it does mean "exactly one match". In this sense, it doesn't really do anything.
Unfortunately, contrary to a now-deleted post, it doesn't keep the regexp from matching http://http://example.org, since the \S+ at the end will match one or more non-whitespace characters, including the http://example.org in http://http://example.org (verified using Python 2.5, just in case my regexp reading was off). So, the regexp given isn't really the best. I'm not a URL expert, but probably something limiting the appearance of ":"s and "//"s after the first one would be necessary (but hardly sufficient) to ensure good URLs.
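For what it's worth, one possible refinement -- only a sketch, not a vetted URL matcher -- is a negative look-ahead that rejects a second scheme immediately after the first:
((mailto:|(news|(ht|f)tp(s?))://)(?!\w+://)\S+)
Against http://http://example.org this fails at the doubled prefix and matches http://example.org instead.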
| Purpose of {1} in this regular expression to match url protocols | I was reading this question about how to parse URLs out of web pages and had a question about the accepted answer which offered this solution:
((mailto\:|(news|(ht|f)tp(s?))\://){1}\S+)
The solution was offered by csmba and he credited it to regexlib.com. Whew. Credits done.
I think this is a fairly naive regular expression but it's a fine starting point for building something better. But, my question is this:
What is the point of {1}? It means "exactly one of the previous grouping", right? Isn't that the default behavior of a grouping in a regular expression? Would the expression be changed in any way if the {1} were removed?
If I saw this from a coworker I would point out his or her error but as I write this the response is rated at a 6 and the expression on regexlib.com is rated a 4 of 5. So maybe I'm missing something?
| [
"@Rob: I disagree. To enforce what you are asking for I think you would need to use negative-look-behind, which is possible but is certainly not related to use {1}. Neither version of the regexp address that particular issue.\nTo let the code speak:\ntibook 0 /home/jj33/swap > cat text\nText this is http://example.com text this is\nText this is http://http://example.com text this is\ntibook 0 /home/jj33/swap > cat p\n#!/usr/bin/perl\n\nmy $re1 = '((mailto\\:|(news|(ht|f)tp(s?))\\://){1}\\S+)';\nmy $re2 = '((mailto\\:|(news|(ht|f)tp(s?))\\://)\\S+)';\n\nwhile (<>) {\n print \"Evaluating: $_\";\n print \"re1 saw \\$1 = $1\\n\" if (/$re1/);\n print \"re2 saw \\$1 = $1\\n\" if (/$re2/);\n}\ntibook 0 /home/jj33/swap > cat text | perl p\nEvaluating: Text this is http://example.com text this is\nre1 saw $1 = http://example.com\nre2 saw $1 = http://example.com\nEvaluating: Text this is http://http://example.com text this is\nre1 saw $1 = http://http://example.com\nre2 saw $1 = http://http://example.com\ntibook 0 /home/jj33/swap >\n\nSo, if there is a difference between the two versions, it's doesn't seem to be the one you suggest.\n",
"I don't think the {1} has any valid function in that regex. \n\n(**mailto:|(news|(ht|f)tp(s?))://){1}**\n\nYou should read this as: \"capture the stuff in the parens exactly one time\". But we don't really care about capturing this for use later, eg $1 in the replacement. So it's pointless.\n",
"I don't think it has any purpose. But because RegEx is almost impossible to understand/decompose, people rarely point out errors. That is probably why no one else pointed it out. \n",
"@Jeff Atwood, your interpretation is a little off - the {1} means match exactly once, but has no effect on the \"capturing\" - the capturing occurs because of the parens - the braces only specify the number of times the pattern must match the source - once, as you say.\nI agree with @Marius, even if his answer is a little terse and may come off as being flippant. Regular expressions are tough, if one's not used to using them, and the {1} in the question isn't quite error - in systems that support it, it does mean \"exactly one match\". In this sense, it doesn't really do anything.\nUnfortunately, contrary to a now-deleted post, it doesn't keep the regexp from matching http://http://example.org, since the \\S+ at the end will match one or more non-whitespace characters, including the http://example.org in http://http://example.org (verified using Python 2.5, just in case my regexp reading was off). So, the regexp given isn't really the best. I'm not a URL expert, but probably something limiting the appearance of \":\"s and \"//\"s after the first one would be necessary (but hardly sufficient) to ensure good URLs.\n"
] | [
3,
2,
1,
1
] | [] | [] | [
"regex"
] | stackoverflow_0000013470_regex.txt |
Q:
When to use an extension method with lambda over LINQtoObjects to filter a collection?
I am prototyping some C# 3 collection filters and came across this.
I have a collection of products:
public class MyProduct
{
public string Name { get; set; }
public Double Price { get; set; }
public string Description { get; set; }
}
var MyProducts = new List<MyProduct>
{
new MyProduct
{
Name = "Surfboard",
Price = 144.99,
Description = "Most important thing you will ever own."
},
new MyProduct
{
Name = "Leash",
Price = 29.28,
Description = "Keep important things close to you."
}
,
new MyProduct
{
Name = "Sun Screen",
Price = 15.88,
Description = "1000 SPF! Who Could ask for more?"
}
};
Now if I use LINQ to filter it works as expected:
var d = (from mp in MyProducts
where mp.Price < 50d
select mp);
And if I use the Where extension method combined with a Lambda the filter works as well:
var f = MyProducts.Where(mp => mp.Price < 50d).ToList();
Question: What is the difference, and why use one over the other?
A:
LINQ query syntax turns into method calls like the code you have.
In other words, there should be no difference.
However, in your two pieces of code you are not calling .ToList in the first, so the first piece of code will produce an enumerable data source, but if you call .ToList on it, the two should be the same.
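To make the deferral concrete -- a tiny illustration using the product list from the question:
// Declaring the query performs no filtering yet
var d = MyProducts.Where(mp => mp.Price < 50d);

// The predicate runs only when the sequence is enumerated
var f = d.ToList(); // both approaches now hold identical results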
A:
As mentioned, d will be IEnumerable<MyProduct> while f is List<MyProduct>
The conversion is done by the C# compiler
var d =
from mp in MyProducts
where mp.Price < 50d
select mp;
Is converted to (before compilation to IL and with generics expanded):
var d =
MyProducts.
Where<MyProduct>( mp => mp.Price < 50d ).
Select<MyProduct>( mp => mp );
//note that this last select is optimised out if it makes no change
Note that in this simple case it makes little difference. Where Linq becomes really valuable is in much more complicated queries.
For instance this statement could include group-bys, orders and a few let statements and still be readable in Linq format, where the equivalent .Method().Method().Method() chain would get complicated.
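A hypothetical example of the kind of query that stays readable in query syntax but becomes an awkward method chain:
// Group the products from the question into price bands and count each band
var bands =
    from mp in MyProducts
    let band = mp.Price < 50d ? "Budget" : "Premium"
    group mp by band into g
    orderby g.Key
    select new { Band = g.Key, Count = g.Count() };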
A:
Other than the ToList difference, #2 is a lot more readable and natural IMO
A:
The syntax you are using for d will get transformed by the compiler into the same IL as the extension methods. The "SQL-like" syntax is supposed to be a more natural way to represent a LINQ expression (although I personally prefer the extension methods). As has already been pointed out, the first example will return an IEnumerable result while the second example will return a List result due to the call to ToList(). If you remove the ToList() call in the second example, they will both return the same result as Where returns an IEnumerable result.
| When to use an extension method with lambda over LINQtoObjects to filter a collection? | I am prototyping some C# 3 collection filters and came across this.
I have a collection of products:
public class MyProduct
{
public string Name { get; set; }
public Double Price { get; set; }
public string Description { get; set; }
}
var MyProducts = new List<MyProduct>
{
new MyProduct
{
Name = "Surfboard",
Price = 144.99,
Description = "Most important thing you will ever own."
},
new MyProduct
{
Name = "Leash",
Price = 29.28,
Description = "Keep important things close to you."
}
,
new MyProduct
{
Name = "Sun Screen",
Price = 15.88,
Description = "1000 SPF! Who Could ask for more?"
}
};
Now if I use LINQ to filter it works as expected:
var d = (from mp in MyProducts
where mp.Price < 50d
select mp);
And if I use the Where extension method combined with a Lambda the filter works as well:
var f = MyProducts.Where(mp => mp.Price < 50d).ToList();
Question: What is the difference, and why use one over the other?
| [
"LINQ turns into method calls like the code you have.\nIn other words, there should be no difference.\nHowever, in your two pieces of code you are not calling .ToList in the first, so the first piece of code will produce an enumerable data source, but if you call .ToList on it, the two should be the same.\n",
"As mentioned d will be IEnumerable<MyProduct> while f is List<MyProduct>\nThe conversion is done by the C# compiler\nvar d = \n from mp in MyProducts\n where mp.Price < 50d\n select mp;\n\nIs converted to (before compilation to IL and with generics expanded):\nvar d = \n MyProducts.\n Where<MyProduct>( mp => mp.Price < 50d ).\n Select<MyProduct>( mp => mp ); \n //note that this last select is optimised out if it makes no change\n\nNote that in this simple case it makes little difference. Where Linq becomes really valuable is in much more complicated loops. \nFor instance this statement could include group-bys, orders and a few let statements and still be readable in Linq format when the equivalent .Method().Method.Method() would get complicated.\n",
"Other than the ToList difference, #2 is a lot more readable and natural IMO\n",
"The syntax you are using for d will get transformed by the compiler into the same IL as the extension methods. The \"SQL-like\" syntax is supposed to be a more natural way to represent a LINQ expression (although I personally prefer the extension methods). As has already been pointed out, the first example will return an IEnumerable result while the second example will return a List result due to the call to ToList(). If you remove the ToList() call in the second example, they will both return the same result as Where returns an IEnumerable result.\n"
] | [
6,
4,
0,
0
] | [] | [] | [
".net_3.5",
"c#",
"lambda",
"linq"
] | stackoverflow_0000005194_.net_3.5_c#_lambda_linq.txt |
Q:
SQL query, count and group by
If I have data like this:
Key
Name
1
Dan
2
Tom
3
Jon
4
Tom
5
Sam
6
Dan
What is the SQL query to bring back the records where Name is repeated 2 or more times?
So the result I would want is
Tom
Dan
A:
Couldn't be simpler...
Select Name, Count(Name) As Count
From Table
Group By Name
Having Count(Name) > 1
Order By Count(Name) Desc
This could also be extended to delete duplicates:
Delete From Table
Where Key In (
Select Max(Key)
From Table
Group By Name
Having Count(Name) > 1
)
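-- Note: this removes only one duplicate per name each run; re-run it
-- (or switch to a ROW_NUMBER() approach) when a name appears three or more times.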
A:
select name from table group by name having count(name) > 1
A:
This could also be accomplished by joining the table with itself,
SELECT DISTINCT t1.name
FROM tbl t1
INNER JOIN tbl t2
ON t1.name = t2.name
WHERE t1.key != t2.key;
| SQL query, count and group by | If I have data like this:
Key
Name
1
Dan
2
Tom
3
Jon
4
Tom
5
Sam
6
Dan
What is the SQL query to bring back the records where Name is repeated 2 or more times?
So the result I would want is
Tom
Dan
| [
"Couldn't be simpler...\nSelect Name, Count(Name) As Count \n From Table\n Group By Name\n Having Count(Name) > 1\n Order By Count(Name) Desc\n\nThis could also be extended to delete duplicates:\nDelete From Table\nWhere Key In (\n Select Max(Key)\n From Table\n Group By Name\n Having Count(Name) > 1\n )\n\n",
"select name from table group by name having count(name) > 1\n\n",
"This could also be accomplished by joining the table with itself,\nSELECT DISTINCT t1.name\nFROM tbl t1\n INNER JOIN tbl t2\n ON t1.name = t2.name\nWHERE t1.key != t2.key;\n\n"
] | [
40,
4,
2
] | [] | [] | [
"sql"
] | stackoverflow_0000003196_sql.txt |
Q:
GridView delete not working
I'm using a GridView in C#.NET 3.5 and have just converted the underlying DataSource from Adapter model to an object which gets its data from LINQ to SQL - i.e. a Business object that returns a List<> for the GetData() function etc.
All was well in Denmark and the Update and conditional Select statements work as expected, but I can't get the Delete function to work. I'm trying to pass in the ID or the entire object, but what arrives is a "new" object with none of the properties set. I'm just wondering if it's the old OldValuesParameterFormatString="original_{0}" monster in the ObjectDataSource causing confusion again.
Anybody have any ideas?
A:
I found the solution. I had to set the GridView's DataKeyNames property to the unique key that my data was returning (in this case a classically named ID field). I'm guessing that this property "unset" itself when the Grid refreshed.
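In markup, that is just a matter of setting the property (a sketch assuming the key column really is named ID):
<asp:GridView ID="ProductsGrid" runat="server"
    DataSourceID="ProductsSource"
    DataKeyNames="ID"
    AutoGenerateDeleteButton="true" />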
| GridView delete not working | I'm using a GridView in C#.NET 3.5 and have just converted the underlying DataSource from Adapter model to an object which gets its data from LINQ to SQL - i.e. a Business object that returns a List<> for the GetData() function etc.
All was well in Denmark and the Update and conditional Select statements work as expected, but I can't get the Delete function to work. I'm trying to pass in the ID or the entire object, but what arrives is a "new" object with none of the properties set. I'm just wondering if it's the old OldValuesParameterFormatString="original_{0}" monster in the ObjectDataSource causing confusion again.
Anybody have any ideas?
| [
"I found the solution. I had to set the GridView's DataKeyNames property to the unique key that my data was returning (in this case a classically named ID field). I'm guessing that this property \"unset\" itself when the Grid refreshed.\n"
] | [
6
] | [] | [] | [
"asp.net",
"c#",
"gridview"
] | stackoverflow_0000013524_asp.net_c#_gridview.txt |
Q:
Browser Sync across many machines
Everyone remembers Google Browser Sync, right? I thought it was great. Unfortunately Google decided not to upgrade the service to Firefox 3.0. Mozilla is developing a replacement for Google Browser Sync which will be a part of the Weave project. I have tried using Weave and found it to be very very slow or totally inoperable. Granted they are in an early development phase right now so I can not really complain.
This specific problem of browser sync got me to thinking though. What do all of you think of Mozilla or someone making a server/client package that we, the users, could run on our 'main' machine? Now you just have to know your own IP or have some way to announce it to your client browsers at work or wherever.
There are several problems I can think of with this: non static IPs, Opening up ports on your local comp etc. It just seems that Mozilla does not want to handle this traffic created by many people syncing their browsers. There is not a way for them to monetize this traffic since all the data uploaded must be encrypted.
A:
Mozilla Weave is capable of running on personal servers. It uses WebDAV to communicate with HTTP servers and can be configured to connect to private servers. I've tried setting it up on my own servers but with no success (Mainly because I'm not very good at working with Apache to configure WebDAV)
I'm hoping Mozilla Weave eventually allows FTP access so I can easily use my server to host my firefox profile.
If you're interested in trying Mozilla Weave on a personal server, there's a tutorial here:
http://marios.tziortzis.com/page/blog/article/setting-up-mozilla-weave-on-your-server/
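For reference, the Apache side of such a setup boils down to a WebDAV block along these lines (module availability, paths, realm and the password file are all assumptions):
# Requires mod_dav; point Weave at https://your-host/weave
DavLockDB /var/lock/apache2/DavLock
<Location /weave>
    DAV On
    AuthType Basic
    AuthName "Mozilla Weave"
    AuthUserFile /etc/apache2/weave.htpasswd
    Require valid-user
</Location>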
A:
Browser Sync is up on Google Code now. Doesn't look like anything has been done with it yet though, as far as making it hosted on personal servers/computers.
A:
I've been using the Firefox Scrapbook extension, sync'd via FolderShare. It takes a little setup, but the nice thing is that Scrapbook grabs a local copy of each page so it works offline or if the site goes away.
A:
Not a complete solution to this problem, but I've found FoxMarks to be a really nice bookmark syncing extension.
| Browser Sync across many machines | Everyone remembers Google Browser Sync, right? I thought it was great. Unfortunately Google decided not to upgrade the service to Firefox 3.0. Mozilla is developing a replacement for Google Browser Sync which will be a part of the Weave project. I have tried using Weave and found it to be very very slow or totally inoperable. Granted they are in an early development phase right now so I can not really complain.
This specific problem of browser sync got me to thinking though. What do all of you think of Mozilla or someone making a server/client package that we, the users, could run on our 'main' machine? Now you just have to know your own IP or have some way to announce it to your client browsers at work or wherever.
There are several problems I can think of with this: non static IPs, Opening up ports on your local comp etc. It just seems that Mozilla does not want to handle this traffic created by many people syncing their browsers. There is not a way for them to monetize this traffic since all the data uploaded must be encrypted.
| [
"Mozilla Weave is capable of running on personal servers. It uses WebDAV to communicate with HTTP servers and can be configured to connect to private servers. I've tried setting it up on my own servers but with no success (Mainly because I'm not very good at working with Apache to configure WebDAV)\nI'm hoping Mozilla Weave eventually allows FTP access so I can easily use my server to host my firefox profile.\nIf you're interested in trying Mozilla Weave on a personal server, there's a tutorial here:\nhttp://marios.tziortzis.com/page/blog/article/setting-up-mozilla-weave-on-your-server/\n",
"Browser Sync is up on Google Code now. Doesn't look like anything has been done with it yet though, as far as making it hosted on personal servers/computers.\n",
"I've been using the Firefox Scrapbook extension, sync'd via FolderShare. It takes a little setup, but the nice thing is that Scrapbook grabs a local copy of each page so it works offline or if the site goes away.\n",
"Not a complete solution to this problem, but I've found FoxMarks to be a really nice bookmark syncing extension.\n"
] | [
7,
1,
0,
0
] | [] | [] | [
"browser",
"firefox",
"synchronization"
] | stackoverflow_0000013518_browser_firefox_synchronization.txt |
Q:
user controls and asp.net mvc
Here is one trivial question, that I am not sure how to handle.
I need to display list of categories on every page, and to be able to choose items from a specific category to be displayed. I use asp.net MVC, and have chosen to create a user control that will display categories. My question is: what is the best approach to pass data to a user control. I already found some information in these blog posts:
http://weblogs.asp.net/stephenwalther/archive/2008/08/12/asp-net-mvc-tip-31-passing-data-to-master-pages-and-user-controls.aspx
http://blog.matthidinger.com/2008/02/21/ASPNETMVCUserControlsStartToFinish.aspx
I would like also to hear your opinion.
PS. I'd like to hear Jeff's opinion, especially because of his experience with UC's on Stackoverflow
A:
I'm using mvc components, which replaced ascx user controls in preview 4.
Example:
http://blog.wekeroad.com/blog/asp-net-mvc-preview-4-componentcontroller-is-now-renderaction/
So, you call components action from View, which then choose View to render. You can pass data in this call also.
A:
it is the mvc futures project. i will probably try this
http://forums.asp.net/t/1303328.aspx. I need to render menu with categories.
| user controls and asp.net mvc | Here is one trivial question, that I am not sure how to handle.
I need to display list of categories on every page, and to be able to choose items from a specific category to be displayed. I use asp.net MVC, and have chosen to create a user control that will display categories. My question is: what is the best approach to pass data to a user control. I already found some information in these blog posts:
http://weblogs.asp.net/stephenwalther/archive/2008/08/12/asp-net-mvc-tip-31-passing-data-to-master-pages-and-user-controls.aspx
http://blog.matthidinger.com/2008/02/21/ASPNETMVCUserControlsStartToFinish.aspx
I would also like to hear your opinion.
PS. I'd like to hear Jeff's opinion, especially because of his experience with UC's on Stackoverflow
| [
"I'm using mvc components, which replaced ascx user controls in preview 4.\nExample:\nhttp://blog.wekeroad.com/blog/asp-net-mvc-preview-4-componentcontroller-is-now-renderaction/\nSo, you call components action from View, which then choose View to render. You can pass data in this call also. \n",
"it is the mvc futures project. i will probably try this\nhttp://forums.asp.net/t/1303328.aspx. I need to render menu with categories.\n"
] | [
2,
1
] | [] | [] | [
"asp.net",
"asp.net_mvc",
"user_controls"
] | stackoverflow_0000013561_asp.net_asp.net_mvc_user_controls.txt |
Q:
Http Auth in a Firefox 3 bookmarklet
I'm trying to create a bookmarklet for posting del.icio.us bookmarks to a separate account.
I tested it from the command line like:
wget -O - --no-check-certificate \
"https://seconduser:thepassword@api.del.icio.us/v1/posts/add?url=http://seet.dk&description=test"
This works great.
I then wanted to create a bookmarklet in my firefox. I googled and found bits and pieces and ended up with:
javascript:void(
open('https://seconduser:password@api.del.icio.us/v1/posts/add?url='
+encodeURIComponent(location.href)
+'&description='+encodeURIComponent(document.title),
'delicious','toolbar=no,width=500,height=250'
)
);
But all that happens is that I get this from del.icio.us:
<?xml version="1.0" standalone="yes"?>
<result code="access denied" />
<!-- fe04.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:02:54 PDT 2008 -->
If I then go to the address bar and press enter, it changes to:
<?xml version='1.0' standalone='yes'?>
<result code="done" />
<!-- fe02.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:07:45 PDT 2008 -->
Any ideas how to get it to work directly from the bookmarks?
A:
Can you sniff the traffic to find what's actually being sent? Is it sending any auth data at all and it's incorrect or being presented in a form the server doesn't like, or is it never being sent by firefox at all?
A:
@travis Looks very nice! I will definitely take a look at it. I can think of several places I can use that.
I never got round to sniffing the traffic, but found out that a PHP site on my own server with HTTP auth worked fine, so I figured it was something with delicious. I then created a PHP page that does a wget of the delicious API and everything works fine :)
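For the record, a minimal sketch of such a proxy page (here using file_get_contents rather than shelling out to wget; credentials and parameter names are placeholders, and it assumes allow_url_fopen plus the openssl extension are enabled):

<?php
// The server, not the browser, performs the authenticated del.icio.us call,
// so the bookmarklet only has to hit this page with url/title parameters.
$api = 'https://seconduser:thepassword@api.del.icio.us/v1/posts/add'
     . '?url=' . urlencode($_GET['url'])
     . '&description=' . urlencode($_GET['title']);
echo file_get_contents($api);
?>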
A:
Does calling the method twice work?
Seems to me that your authentication is being approved after the content arrives, so then a second attempt now works because you have the correct cookies.
A:
I'd recommend checking out the iMacros addon for Firefox. I use it to login to a local web server and after logging in, navigate directly to a certain page. The code I have looks like this, but it allows you to record your own macros:
VERSION BUILD=6000814 RECORDER=FX
TAB T=1
URL GOTO=http://10.20.2.4/login
TAG POS=1 TYPE=INPUT:TEXT FORM=NAME:introduce ATTR=NAME:initials CONTENT=username-goes-here
SET !ENCRYPTION NO
TAG POS=1 TYPE=INPUT:PASSWORD FORM=NAME:introduce ATTR=NAME:password CONTENT=password-goes-here
TAG POS=1 TYPE=INPUT:SUBMIT FORM=NAME:introduce ATTR=NAME:Submit&&VALUE:Go
URL GOTO=http://10.20.2.4/timecard
I middle click on it and it opens a new tab and runs the macro taking me directly to the page I want, logged in with the account I specified.
| Http Auth in a Firefox 3 bookmarklet | I'm trying to create a bookmarklet for posting del.icio.us bookmarks to a separate account.
I tested it from the command line like:
wget -O - --no-check-certificate \
"https://seconduser:thepassword@api.del.icio.us/v1/posts/add?url=http://seet.dk&description=test"
This works great.
I then wanted to create a bookmarklet in my firefox. I googled and found bits and pieces and ended up with:
javascript:void(
open('https://seconduser:password@api.del.icio.us/v1/posts/add?url='
+encodeURIComponent(location.href)
+'&description='+encodeURIComponent(document.title),
'delicious','toolbar=no,width=500,height=250'
)
);
But all that happens is that I get this from del.icio.us:
<?xml version="1.0" standalone="yes"?>
<result code="access denied" />
<!-- fe04.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:02:54 PDT 2008 -->
If I then go to the address bar and press enter, it changes to:
<?xml version='1.0' standalone='yes'?>
<result code="done" />
<!-- fe02.api.del.ac4.yahoo.net uncompressed/chunked Thu Aug 7 02:07:45 PDT 2008 -->
Any ideas how to get it to work directly from the bookmarks?
| [
"Can you sniff the traffic to find what's actually being sent? Is it sending any auth data at all and it's incorrect or being presented in a form the server doesn't like, or is it never being sent by firefox at all?\n",
"@travis Looks very nice! I will sure take a look into it. I can think of several places I can use that\nI never got round to sniff the traffic but found out that a php site on my own server with http-auth worked fine, so i figured it was something with delicious. I then created a php page that does a wget of the delicious api and everything works fine :)\n",
"Does calling the method twice work?\nSeems to me that your authentication is being approved after the content arrives, so then a second attempt now works because you have the correct cookies.\n",
"I'd recommend checking out the iMacros addon for Firefox. I use it to login to a local web server and after logging in, navigate directly to a certain page. The code I have looks like this, but it allows you to record your own macros:\nVERSION BUILD=6000814 RECORDER=FX\nTAB T=1\nURL GOTO=http://10.20.2.4/login\nTAG POS=1 TYPE=INPUT:TEXT FORM=NAME:introduce ATTR=NAME:initials CONTENT=username-goes-here\nSET !ENCRYPTION NO\nTAG POS=1 TYPE=INPUT:PASSWORD FORM=NAME:introduce ATTR=NAME:password CONTENT=password-goes-here\nTAG POS=1 TYPE=INPUT:SUBMIT FORM=NAME:introduce ATTR=NAME:Submit&&VALUE:Go\nURL GOTO=http://10.20.2.4/timecard\n\nI middle click on it and it opens a new tab and runs the macro taking me directly to the page I want, logged in with the account I specified.\n"
] | [
4,
2,
1,
1
] | [] | [] | [
"delicious_api",
"firefox",
"javascript"
] | stackoverflow_0000004544_delicious_api_firefox_javascript.txt |
Q:
Has anybody used Google Performance Tools?
Looking for feedback on:
http://code.google.com/p/google-perftools/wiki/GooglePerformanceTools
A:
There's a pretty good post that I read a while back that outlines some testing and analysis of GPT in a variety of scenarios
A:
I have been using GPT at work since 2007, and I am totally satisfied. I use it to monitor and optimize a Linux network library, and I have obtained significant results.
The main flaw of GPT is its lack of precision. Due to the design, you only get the main time-consuming functions, but this is often what you need when you want to optimize a program.
For more precision, I advise using other tools like gprof or IBM Quantify.
| Has anybody used Google Performance Tools? | Looking for feedback on:
http://code.google.com/p/google-perftools/wiki/GooglePerformanceTools
| [
"There's a pretty good post that I read a while back that outlines some testing and analysis of GPT in a variety of scenarios\n",
"I use GPT at work since 2007, and I totally satisfied. I use it to monitor and optimize a Linux network library, and I have obtained a significant results.\nThe main flaw of GPT is the lack of precision. Due to the design, you only got the main time consuming functions, but this is often what you need when you want to optimize a program. \nTo be more precise I advise to use other tools like gprof or IBM Quantify.\n"
] | [
4,
2
] | [] | [] | [
"google_perftools",
"malloc",
"multithreading",
"performance"
] | stackoverflow_0000006957_google_perftools_malloc_multithreading_performance.txt |
Q:
Programmatic access to call stack in .net
How can I get programmatic access to the call stack?
A:
Try System.Diagnostics.StackTrace.
A:
You can use the StackTrace and StackFrame classes in System.Diagnostics.
A:
The right way is to use the StackTrace and StackFrame classes. Throwing an exception just to get the stack trace is completely misusing exceptions.
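For illustration, here is a minimal sketch of walking the current thread's stack with these classes (file and line details additionally require debug symbols to be present):

using System;
using System.Diagnostics;

class StackDemo
{
    static void Main() { Outer(); }
    static void Outer() { Inner(); }

    static void Inner()
    {
        // Frame 0 is Inner, then Outer, then Main.
        StackTrace trace = new StackTrace();
        foreach (StackFrame frame in trace.GetFrames())
        {
            Console.WriteLine(frame.GetMethod().Name);
        }
    }
}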
| Programmatic access to call stack in .net | How can I get programmatic access to the call stack?
| [
"Try System.Diagnostics.StackTrace.\n",
"You can use the StackTrace and StrackFrame classes in System.Diagnostics.\n",
"The right way is to use the StackTrace and StackFrame classes. Throwing an exception just to get the stack trace is completely misusing exceptions.\n"
] | [
22,
5,
0
] | [] | [] | [
".net",
"callstack",
"reflection"
] | stackoverflow_0000013434_.net_callstack_reflection.txt |
Q:
Asynchronous multi-direction server-client communication over the same open socket?
I have a client-server app where the client is on a Windows Mobile 6 device, written in C++ and the server is on full Windows and written in C#.
Originally, I only needed it to send messages from the client to the server, with the server only ever sending back an acknowledgement that it received the message. Now, I would like to update it so that the server can actually send a message to the client to request data. As I currently have it set up, the client is only in receive mode after it sends data to the server, so this doesn't allow the server to send a request at any time; I would have to wait for client data. My first thought would be to create another thread on the client with a separate open socket, listening for server requests...just like the server already has with respect to the client. Is there a way, within the same thread and using the same socket, to allow the server to send requests at any time?
Can you use something to the effect of WaitForMultipleObjects() and pass it a receive buffer and an event that tells it there is data to be sent?
A:
When I needed to write an application with a client-server model where the clients could leave and enter whenever they want, (I assume that's also the case for your application as you use mobile devices) I made sure that the clients send an online message to the server, indicating they were connected and ready to do whatever they needed doing.
At that time the server could send messages back to the client through the same open connection.
Also, but I don't know if that is applicable for you, I had some sort of heartbeat the clients sent to the server, letting it know it was still online. That way the server knows when a client was forcibly disconnected from the network and it could mark that client back as offline.
A:
Using asynchronous communication is totally possible in a single thread!
There is a common design pattern in network software development called the reactor pattern (look at this book). Some well-known network libraries provide an implementation of this pattern (look at ACE).
Briefly, the reactor is an object you register all your sockets inside, and then you wait for something. If something happens (new data arrived, connection closed...) the reactor will notify you. And of course, you can use only one socket to send and receive data asynchronously.
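The same single-socket idea can be sketched on the C# server side as well (a minimal illustration with hypothetical names, not a drop-in implementation): keep an asynchronous receive pending via a callback, and the socket stays free for sends at any time.

using System;
using System.Net.Sockets;
using System.Text;

class Connection
{
    readonly Socket socket;                 // one connected socket, both directions
    readonly byte[] buffer = new byte[4096];

    public Connection(Socket s)
    {
        socket = s;
        // Arm the receive once; the callback re-arms it, so a receive is
        // always pending without blocking the thread.
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
    }

    void OnReceive(IAsyncResult ar)
    {
        int read = socket.EndReceive(ar);
        if (read > 0)
        {
            Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read));
            socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, null);
        }
    }

    // Sends don't have to wait for the pending receive.
    public void Send(string message)
    {
        socket.Send(Encoding.UTF8.GetBytes(message));
    }
}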
A:
I'm not clear on whether or not you're wanting to add the asynchronous bits to the server in C# or the client in C++.
If you're talking about doing this in C++, desktop Windows platforms can do socket I/O asynchronously through the API's that use overlapped I/O. For sockets, WSASend, WSARecv both allow async I/O (read the documentation on their LPOVERLAPPED parameters, which you can populate with events that get set when the I/O completes).
I don't know if Windows Mobile platforms support these functions, so you might have to do some additional digging.
A:
Check out asio. It is a cross-compatible C++ library for asynchronous IO. I am not sure if this would be useful for the server (I have never tried to link a standard C++ DLL to a C# project), but for the client it would be useful.
We use it with our application, and it solved most of our IO concurrency problems.
| Asynchronous multi-direction server-client communication over the same open socket? | I have a client-server app where the client is on a Windows Mobile 6 device, written in C++ and the server is on full Windows and written in C#.
Originally, I only needed it to send messages from the client to the server, with the server only ever sending back an acknowledgement that it received the message. Now, I would like to update it so that the server can actually send a message to the client to request data. As I currently have it set up, the client is only in receive mode after it sends data to the server, so this doesn't allow the server to send a request at any time; I would have to wait for client data. My first thought would be to create another thread on the client with a separate open socket, listening for server requests...just like the server already has with respect to the client. Is there a way, within the same thread and using the same socket, to allow the server to send requests at any time?
Can you use something to the effect of WaitForMultipleObjects() and pass it a receive buffer and an event that tells it there is data to be sent?
| [
"When I needed to write an application with a client-server model where the clients could leave and enter whenever they want, (I assume that's also the case for your application as you use mobile devices) I made sure that the clients send an online message to the server, indicating they were connected and ready to do whatever they needed doing.\nat that time the server could send messages back to the client trough the same open connection.\nAlso, but I don't know if that is applicable for you, I had some sort of heartbeat the clients sent to the server, letting it know it was still online. That way the server knows when a client was forcibly disconnected from the network and it could mark that client back as offline.\n",
"Using asynchronous communication is totally possible in single thread! \nThere is a common design pattern in network software development called the reactor pattern (look at this book). Some well known network library provides an implementation of this pattern (look at ACE).\nBriefly, the reactor is an object, you register all your sockets inside, and you wait for something. If something happened (new data arrived, connection close...) the reactor will notify you. And of course, you can use only one socket to send and received data asynchronously.\n",
"I'm not clear on whether or not you're wanting to add the asynchronous bits to the server in C# or the client in C++.\nIf you're talking about doing this in C++, desktop Windows platforms can do socket I/O asynchronously through the API's that use overlapped I/O. For sockets, WSASend, WSARecv both allow async I/O (read the documentation on their LPOVERLAPPED parameters, which you can populate with events that get set when the I/O completes).\nI don't know if Windows Mobile platforms support these functions, so you might have to do some additional digging.\n",
"Check out asio. It is a cross compatable c++ library for asyncronous IO. I am not sure if this would be useful for the server ( I have never tried to link a standard c++ DLL to a c# project) but for the client it would be useful.\nWe use it with our application, and it solved most of our IO concurrency problems.\n"
] | [
9,
8,
4,
4
] | [] | [] | [
"c#",
"c++",
"sockets"
] | stackoverflow_0000001241_c#_c++_sockets.txt |
Q:
Using unhandled exceptions instead of Contains()?
Imagine an object you are working with has a collection of other objects associated with it, for example, the Controls collection on a WinForm. You want to check for a certain object in the collection, but the collection doesn't have a Contains() method. There are several ways of dealing with this.
Implement your own Contains() method by looping through all items in the collection to see if one of them is what you are looking for. This seems to be the "best practice" approach.
I recently came across some code where instead of a loop, there was an attempt to access the object inside a try statement, as follows:
try
{
Object aObject = myCollection[myObject];
}
catch(Exception e)
{
//if this is thrown, then the object doesn't exist in the collection
}
My question is: how poor of a programming practice do you consider the second option to be, and why? How does its performance compare to a loop through the collection?
A:
The general rule of thumb is to avoid using exceptions for control flow unless the circumstances that will trigger the exception are "exceptional" -- e.g., extremely rare!
If this is something that will happen normally and regularly it definitely should not be handled as an exception.
Exceptions are very, very slow due to all the overhead involved, so there can be performance reasons as well, if it's happening often enough.
A:
I would have to say that this is pretty bad practice. Whilst some people might be happy to say that looping through the collection is less efficient than throwing an exception, there is an overhead to throwing an exception. I would also question why you are using a collection to access an item by key when you would be better suited to using a dictionary or hashtable.
My main problem with this code however, is that regardless of the type of exception thrown, you are always going to be left with the same result.
For example, an exception could be thrown because the object doesn't exist in the collection, or because the collection itself is null, or because you can't cast myCollection[myObject] to aObject.
All of these exceptions will get handled in the same way, which may not be your intention.
These are a couple of nice articles on when and where it is usually considered acceptable to throw exceptions:
Foundations of Programming
Throwing exceptions in c#
I particularly like this quote from the second article:
It is important that exceptions are
thrown only when an unexpected or
invalid activity occurs that prevents
a method from completing its normal
function. Exception handling
introduces a small overhead and lowers
performance so should not be used for
normal program flow instead of
conditional processing. It can also be
difficult to maintain code that
misuses exception handling in this
way.
A:
If, while writing your code, you expect this object to be in the collection, and then during runtime you find that it isn't, I would call that an exceptional case, and it is proper to use an exception.
However, if you're simply testing for the existence of an object, and you find that it is not there, this is not exceptional. Using an exception in this case is not proper.
The analysis of the runtime performance depends on the actual collection being used, and the method if searching for it. That shouldn't matter though. Don't let the illusion of optimization fool you into writing confusing code.
A:
I would have to think about it more as to how much I like it... my gut instinct is, eh, not so much...
EDIT: Ryan Fox's comments on the exceptional case are perfect; I concur.
As for performance, it depends on the indexer on the collection. C# lets you override the indexer operator, so if it is doing a for loop like the contains method you would write, then it will be just as slow (maybe a few nanoseconds slower due to the try/catch... but nothing to worry about unless that code itself is within a huge loop).
If the indexer is O(1) (or even O(log(n))... or anything faster than O(n)), then the try/catch solution would be faster of course.
Also, I am assuming the indexer is throwing the exception; if it is returning null, you could of course just check for null and not use the try/catch.
A:
In general, using exception handling for program flow and logic is bad practice. I personally feel that the latter option is unacceptable use of exceptions. Given the features of languages commonly used these days (such as Linq and lambdas in C# for example) there's no reason not to write your own Contains() method.
As a final thought, these days most collections do have a contains method already. So I think for the most part this is a non-issue.
A:
Exceptions should be exceptional.
Something like 'The collection is missing because the database has fallen out from underneath it' is exceptional
Something like 'the key is not present' is normal behaviour for a dictionary.
For your specific example of a winforms Control collection, the Controls property has a ContainsKey method, which is what you're supposed to use.
There's no ContainsValue because when dealing with dictionaries/hashtables, there's no fast way short of iterating through the entire collection, of checking if something is present, so you're really discouraged from doing that.
As for WHY Exceptions should be exceptional, it's about 2 things
Indicating what your code is trying to do. You want to have your code match what it is trying to achieve, as closely as possible, so it is readable and maintainable. Exception handling adds a bunch of extra cruft which gets in the way of this purpose
Brevity of code. You want your code to do what it's doing in the most direct way, so it is readable and maintainable. Again, the cruft added by exception handling gets in the way of this.
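To make the preferred pattern concrete, here is a minimal C# sketch (with made-up data) contrasting an explicit membership test with exception-driven lookup:

using System;
using System.Collections.Generic;

class Lookup
{
    static void Main()
    {
        var controls = new Dictionary<string, int> { { "okButton", 1 } };

        // Preferred: an explicit test (or TryGetValue) keeps normal
        // control flow out of the exception machinery.
        if (controls.ContainsKey("cancelButton"))
            Console.WriteLine(controls["cancelButton"]);

        // The same logic via try/catch works, but pays the cost of a
        // thrown KeyNotFoundException on every miss.
        try
        {
            Console.WriteLine(controls["cancelButton"]);
        }
        catch (KeyNotFoundException)
        {
            Console.WriteLine("not in the collection");
        }
    }
}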
A:
Take a look at this blog post from Krzystof: http://blogs.msdn.com/kcwalina/archive/2008/07/17/ExceptionalError.aspx
Exceptions should be used for communicating error conditions, but they shouldn't be used as control logic (especially when there are far simpler ways to determine a condition, such as Contains).
Part of the issue is that exceptions, while not expensive to throw are expensive to catch and all exceptions are caught at some point.
| Using unhandled exceptions instead of Contains()? | Imagine an object you are working with has a collection of other objects associated with it, for example, the Controls collection on a WinForm. You want to check for a certain object in the collection, but the collection doesn't have a Contains() method. There are several ways of dealing with this.
Implement your own Contains() method by looping through all items in the collection to see if one of them is what you are looking for. This seems to be the "best practice" approach.
I recently came across some code where instead of a loop, there was an attempt to access the object inside a try statement, as follows:
try
{
Object aObject = myCollection[myObject];
}
catch(Exception e)
{
//if this is thrown, then the object doesn't exist in the collection
}
My question is: how poor of a programming practice do you consider the second option to be, and why? How does its performance compare to a loop through the collection?
| [
"The general rule of thumb is to avoid using exceptions for control flow unless the circumstances that will trigger the exception are \"exceptional\" -- e.g., extremely rare!\nIf this is something that will happen normally and regularly it definitely should not be handled as an exception.\nExceptions are very, very slow due to all the overhead involved, so there can be performance reasons as well, if it's happening often enough.\n",
"I would have to say that this is pretty bad practice. Whilst some people might be happy to say that looping through the collection is less efficient to throwing an exception, there is an overhead to throwing an exception. I would also question why you are using a collection to access an item by key when you would be better suited to using a dictionary or hashtable.\nMy main problem with this code however, is that regardless of the type of exception thrown, you are always going to be left with the same result.\nFor example, an exception could be thrown because the object doesn't exist in the collection, or because the collection itself is null or because you can't cast myCollect[myObject] to aObject.\nAll of these exceptions will get handled in the same way, which may not be your intention.\nThese are a couple of nice articles on when and where it is usally considered acceptable to throw exceptions:\n\nFoundations of Programming\nThrowing exceptions in c#\n\nI particularly like this quote from the second article:\n\nIt is important that exceptions are\n thrown only when an unexpected or\n invalid activity occurs that prevents\n a method from completing its normal\n function. Exception handling\n introduces a small overhead and lowers\n performance so should not be used for\n normal program flow instead of\n conditional processing. It can also be\n difficult to maintain code that\n misuses exception handling in this\n way.\n\n",
"If, while writing your code, you expect this object to be in the collection, and then during runtime you find that it isn't, I would call that an exceptional case, and it is proper to use an exception.\nHowever, if you're simply testing for the existence of an object, and you find that it is not there, this is not exceptional. Using an exception in this case is not proper.\nThe analysis of the runtime performance depends on the actual collection being used, and the method if searching for it. That shouldn't matter though. Don't let the illusion of optimization fool you into writing confusing code.\n",
"I would have to think about it more as to how much I like it... my gut instinct is, eh, not so much...\nEDIT: Ryan Fox's comments on the exceptional case is perfect, I concur\nAs for performance, it depends on the indexer on the collection. C# lets you override the indexer operator, so if it is doing a for loop like the contains method you would write, then it will be just as slow (with maybe a few nanoseconds slower due to the try/catch... but nothing to worry about unless that code itself is within a huge loop).\nIf the indexer is O(1) (or even O(log(n))... or anything faster than O(n)), then the try/catch solution would be faster of course.\nAlso, I am assuming the indexer is throwing the exception, if it is returning null, you could of course just check for null and not use the try/catch.\n",
"In general, using exception handling for program flow and logic is bad practice. I personally feel that the latter option is unacceptable use of exceptions. Given the features of languages commonly used these days (such as Linq and lambdas in C# for example) there's no reason not to write your own Contains() method.\nAs a final thought, these days most collections do have a contains method already. So I think for the most part this is a non-issue.\n",
"Exceptions should be exceptional.\nSomething like 'The collection is missing because the database has fallen out from underneath it' is exceptional\nSomething like 'the key is not present' is normal behaviour for a dictionary.\nFor your specific example of a winforms Control collection, the Controls property has a ContainsKey method, which is what you're supposed to use.\nThere's no ContainsValue because when dealing with dictionaries/hashtables, there's no fast way short of iterating through the entire collection, of checking if something is present, so you're really discouraged from doing that.\nAs for WHY Exceptions should be exceptional, it's about 2 things\n\nIndicating what your code is trying to do. You want to have your code match what it is trying to achieve, as closely as possible, so it is readable and maintainable. Exception handling adds a bunch of extra cruft which gets in the way of this purpose\nBrevity of code. You want your code to do what it's doing in the most direct way, so it is readable and maintainable. Again, the cruft added by exception handling gets in the way of this.\n\n",
"Take a look at this blog post from Krzystof: http://blogs.msdn.com/kcwalina/archive/2008/07/17/ExceptionalError.aspx\nExceptions should be used for communicating error conditions, but they shouldn't be used as control logic (especially when there are far simpler ways to determine a condition, such as Contains).\nPart of the issue is that exceptions, while not expensive to throw are expensive to catch and all exceptions are caught at some point.\n"
] | [
4,
3,
0,
0,
0,
0,
0
] | [
"The latter is an acceptable solution. Although I would definitely catch on the specific exception (ElementNotFound?) that the collection throws in that case.\nSpeedwise, it depends on the common case. If you're more likely to find the element than not, the exception solution will be faster. If you're more likely to fail, then it would depend on size of the collection and its iteration speed. Either way, you'd want to measure against normal use to see if this is actually a bottle neck before worrying about speed like this. Go for clarity first, and the latter solution is far more clear than the former.\n"
] | [
-2
] | [
".net",
"c#",
"error_handling"
] | stackoverflow_0000008348_.net_c#_error_handling.txt |
Q:
SVN merge merged extra stuff
I just did a merge using something like:
svn merge -r 67212:67213 https://my.svn.repository/trunk .
I only had 2 files, one of which is a simple ChangeLog. Rather than just merging my ChangeLog changes, it actually pulled mine plus some previous ones that were not in the destination ChangeLog. I noticed there was a conflict when I executed --dry-run, so I updated ChangeLog, and there was still a conflict (and I saw the conflict when I did the actual merge).
I then later diffed on the file I was merging from:
svn diff -r 67212:67213 ChangeLog
And I see just the changes I had made, so I know that extra changes didn't get in there somehow.
This makes me worried that merge is not actually just taking what I changed, which is what I would have expected. Can anybody explain what happened?
UPDATE: In response to NilObject:
So, I have 2 files changed, only ChangeLog is relevant, the other merged fine. When I go to my regular trunk checkout, I do the diff command above and see:
Index: ChangeLog
===================================================================
--- ChangeLog (revision 67212)
+++ ChangeLog (revision 67213)
@@ -1,3 +1,7 @@
+2008-08-06 Mike Stone <myemail>
+
+ * changed_file: Details.
+
2008-08-06 Someone Else <their_email>
* theirChanges: Details.
After my merge of the previous changes, the diff of ChangeLog looks like this:
Index: ChangeLog
===================================================================
--- ChangeLog (revision 67215)
+++ ChangeLog (working copy)
@@ -1,3 +1,14 @@
+<<<<<<< .working
+=======
+2008-08-06 Mike Stone <myemail>
+
+ * changed_file: Details.
+
+2008-08-06 Someone Else <their_email>
+
+ * theirChanges: Details.
+
+>>>>>>> .merge-right.r67213
2008-08-05 Someone Else2 <their2_email>
* olderChange: Details.
Note that the entry that was incorrectly pulled in was not in the file I am merging into, yet it was not one of my changes and shouldn't have been merged anyway. It was easy to fix (remove the extra lines that weren't part of my changes), but it still makes me worry about merging automatically in SVN.
A:
This only happens with conflicts - basically svn tried to merge the change in, but (roughly speaking) saw the change as:
Add
2008-08-06 Mike Stone <myemail>
* changed_file: Details.
before
2008-08-06 Someone Else <their_email>
And it couldn't find the Someone Else line while doing the merge, so chucked that bit in for context when putting in the conflict. If it was a non-conflicting merge only the changes you expected would have been applied.
A:
There's not really enough information to go on here.
svn merge -r 67212:67213 https://my.svn.repository/trunk .
will merge any files changed in the revision 67212 in the folder /trunk on the repository and merge them into your current working directory. If you do:
svn log -r 67212
What files does it show changed? Merge will only pull changes from the first argument, and apply them to the second. It does not upload back to the server in the first argument.
If this doesn't answer your question, could you post more details as to what exactly is happening?
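As an aside, a single revision can also be cherry-picked with the -c shorthand (Subversion 1.4 and later), and svn log -v lists exactly which paths that revision touched, which helps confirm what a merge should bring over:

svn log -v -r 67213 https://my.svn.repository/trunk
svn merge -c 67213 https://my.svn.repository/trunk .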
| SVN merge merged extra stuff | I just did a merge using something like:
svn merge -r 67212:67213 https://my.svn.repository/trunk .
I only had 2 files, one of which is a simple ChangeLog. Rather than just merging my ChangeLog changes, it actually pulled mine plus some previous ones that were not in the destination ChangeLog. I noticed there was a conflict when I executed --dry-run, so I updated ChangeLog, and there was still a conflict (and I saw the conflict when I did the actual merge).
I then later diffed on the file I was merging from:
svn diff -r 67212:67213 ChangeLog
And I see just the changes I had made, so I know that extra changes didn't get in there somehow.
This makes me worried that merge is not actually just taking what I changed, which is what I would have expected. Can anybody explain what happened?
UPDATE: In response to NilObject:
So, I have 2 files changed, only ChangeLog is relevant, the other merged fine. When I go to my regular trunk checkout, I do the diff command above and see:
Index: ChangeLog
===================================================================
--- ChangeLog (revision 67212)
+++ ChangeLog (revision 67213)
@@ -1,3 +1,7 @@
+2008-08-06 Mike Stone <myemail>
+
+ * changed_file: Details.
+
2008-08-06 Someone Else <their_email>
* theirChanges: Details.
After my merge of the previous changes, the diff of ChangeLog looks like this:
Index: ChangeLog
===================================================================
--- ChangeLog (revision 67215)
+++ ChangeLog (working copy)
@@ -1,3 +1,14 @@
+<<<<<<< .working
+=======
+2008-08-06 Mike Stone <myemail>
+
+ * changed_file: Details.
+
+2008-08-06 Someone Else <their_email>
+
+ * theirChanges: Details.
+
+>>>>>>> .merge-right.r67213
2008-08-05 Someone Else2 <their2_email>
* olderChange: Details.
Note that the entry that was incorrectly pulled in was not in the file I am merging into, yet it was not one of my changes and shouldn't have been merged anyway. It was easy to fix (remove the extra lines that weren't part of my changes), but it still makes me worry about merging automatically in SVN.
| [
"This only happens with conflicts - basically svn tried to merge the change in, but (roughly speaking) saw the change as:\nAdd\n2008-08-06 Mike Stone <myemail>\n\n* changed_file: Details.\n\nbefore\n2008-08-06 Someone Else <their_email>\n\nAnd it couldn't find the Someone Else line while doing the merge, so chucked that bit in for context when putting in the conflict. If it was a non-conflicting merge only the changes you expected would have been applied.\n",
"There's not really enough information to go on here.\nsvn merge -r 67212:67213 https://my.svn.repository/trunk .\n\nwill merge any files changed in the revision 67212 in the folder /trunk on the repository and merge them into your current working directory. If you do:\nsvn log -r 67212\n\nWhat files does it show changed? Merge will only pull changes from the first argument, and apply them to the second. It does not upload back to the server in the first argument.\nIf this doesn't answer your question, could you post more details as to what exactly is happening?\n"
] | [
2,
0
] | [] | [] | [
"merge",
"svn"
] | stackoverflow_0000004072_merge_svn.txt |
Q:
C# .NET listing contents of remote files
Is it possible in .NET to list files on a remote location like an URL? Much in the same way the System.IO classes work. All I need is the URLs to images that are on a remote server.
A:
Short answer: No, unless you have more control over that web-server
Long answer: Here are possible solutions...
You will need a server-side script that will do it locally and output this list in your preferred format.
Most of the web-servers implement default file-browsing pages, so you could theoretically parse those but this solution will be very fragile and not very portable even between different versions of the same web-server.
If you have FTP access...
A:
Is it possible in .NET to list files on a remote location like an URL?
You should specify which protocol we're talking about.
For HTTP, lubos hasko provided the answer: no. HTTP has no concept of files; only of resources. If you have control over the web server, you can ask it to provide a directory listing, or, better yet, you can write code that lists the directory server-side for you. Without such control, you have to rely on the server to provide a listing, which 1) may be disabled for security reasons, 2) is non-standardized in its format, 3) will be, like lubos said, fragile to parse ("scrape").
If you mean / if the server provides a protocol intended for file transfer, such as FTP, SMB/CIFS, etc., it'll be a lot easier. For example, for FTP, you'll want to look into WebRequestMethods.Ftp.ListDirectoryDetails.
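A minimal sketch of the FTP route (host and credentials are placeholders):

using System;
using System.IO;
using System.Net;

class FtpListing
{
    static void Main()
    {
        var request = (FtpWebRequest)WebRequest.Create("ftp://example.com/images/");
        request.Method = WebRequestMethods.Ftp.ListDirectoryDetails;
        request.Credentials = new NetworkCredential("user", "password");

        using (var response = (FtpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // Prints the server-formatted directory listing, one entry per line.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}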
| C# .NET listing contents of remote files | Is it possible in .NET to list files on a remote location like an URL? Much in the same way the System.IO classes work. All I need is the URLs to images that are on a remote server.
| [
"Short answer: No, unless you have more control over that web-server\nLong answer: Here are possible solutions...\n\nYou will need server-side script that will do it locally and output this list in your preferred format.\nMost of the web-servers implement default file-browsing pages, so you could theoretically parse those but this solution will be very fragile and not very portable even between different versions of the same web-server.\nIf you have FTP access...\n\n",
"\nIs it possible in .NET to list files on a remote location like an URL?\n\nYou should specify which protocol we're talking about.\nFor HTTP, lubos hasko provided the answer: no. HTTP has no concept of files; only of resources. If you have control over the web server, you can ask it to provide a directory listing, or, better yet, you can write code that lists the directory server-side for you. Without such control, you have to rely on the server to provide a listing, which 1) may be disabled for security reasons, 2) is non-standardized in its format, 3) will be, like lubos said, fragile to parse (\"scrape\").\nIf you mean / if the server provides a protocol intended for file transfer, such as FTP, SMB/CIFS, etc., it'll be a lot easier. For example, for FTP, you'll want to look into WebRequestMethods.Ftp.ListDirectoryDetails.\n"
] | [
3,
1
] | [] | [] | [
".net"
] | stackoverflow_0000013655_.net.txt |
Q:
Making code work with register_globals turned off
I have inherited some legacy PHP code that was written back when it was standard practice to use register_globals (As of PHP 4.2.0, this directive defaults to off, released 22 Apr 2002).
We know now that it is bad for security to have it enabled. The problem is how do I find all the places in the code where I need to use $_GET or $_POST? My only thought was to set the error reporting to warn about uninitialized variables and then test each part of the site. Is there an easier way? Will I have to test each code path in the site or will PHP give a warning on a file basis?
A:
If you set error reporting to E_ALL, it warns in the error log about undefined variables complete with filename and line number (assuming you are logging to a file). However, it will warn only when it comes across an undefined variable, so I think you will have to test each code path. Running PHP from the command line doesn't seem to help either.
There is a debugging tool named xdebug; I haven't tried it, but maybe that can be useful?
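To illustrate the E_ALL route, a minimal sketch (the variable name is hypothetical):

<?php
// With register_globals off and E_ALL on, every read of a variable that
// used to be auto-populated now produces a notice with file and line.
error_reporting(E_ALL);
ini_set('display_errors', '1'); // or log_errors/error_log on a live site

echo $username; // Notice: Undefined variable: username in file.php on line N
?>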
A:
I wrote a script using the built-in Tokenizer functions. It's pretty rough, but it worked for the code base I was working on. I believe you could also use CodeSniffer.
A:
You could manually 'fake' the register globals effect but add some security. (I partly grabbed this from the osCommerce fork called xoops)
// Detect bad global variables
$bad_global_list = array('GLOBALS', '_SESSION', 'HTTP_SESSION_VARS', '_GET', 'HTTP_GET_VARS', '_POST', 'HTTP_POST_VARS', '_COOKIE', 'HTTP_COOKIE_VARS', '_REQUEST', '_SERVER', 'HTTP_SERVER_VARS', '_ENV', 'HTTP_ENV_VARS', '_FILES', 'HTTP_POST_FILES');
foreach ($bad_global_list as $bad_global ) {
if ( isset( $_REQUEST[$bad_global] ) ) {
die('Bad Global');
}
}
// Make global variables
foreach ($_REQUEST as $name => $value) {
    $$name = $value; // Creates a variable named $name equal to $value.
}
Though you'd want to tweak it to make your code more secure, at least by adding your global configuration variables (like the path and base url) to the bad globals list.
You could also use it to easily compile a list of all used get/post variables to help you eventually replace all occurrences of, say, $return_url with $_REQUEST['return_url'];
A:
I know that there's a way to set php.ini values for that script with a certain command, I thus went looking and found this too - Goto last post on page
I also found the following post which may be of use - Goto last post on the page
I will add to this more if nobody has found an answer but I must now catch a train.
| Making code work with register_globals turned off | I have inherited some legacy PHP code what was written back when it was standard practice to use register_globals (As of PHP 4.2.0, this directive defaults to off, released 22. Apr 2002).
We know now that it is bad for security to have it enabled. The problem is how do I find all the places in the code where I need to use $_GET or $_POST? My only thought was to set the error reporting to warn about uninitialized variables and then test each part of the site. Is there an easier way? Will I have to test each code path in the site or will PHP give a warning on a file basis?
| [
"If you set error reporting to E_ALL, it warns in the error log about undefined variables complete with filename and line number (assuming you are logging to a file). However, it will warn only if when it comes across an undefined variable, so I think you will have to test each code path. Running php from the command line doesn't seem to help also.\nThere is a debugging tool named xdebug, haven't tried it, but maybe that can be useful?\n",
"I wrote a script using the built-in Tokenizer functions. Its pretty rough but it worked for the code base I was working on. I believe you could also use CodeSniffer.\n",
"You could manually 'fake' the register globals effect but add some security. (I partly grabbed this from the osCommerce fork called xoops)\n// Detect bad global variables\n$bad_global_list = array('GLOBALS', '_SESSION', 'HTTP_SESSION_VARS', '_GET', 'HTTP_GET_VARS', '_POST', 'HTTP_POST_VARS', '_COOKIE', 'HTTP_COOKIE_VARS', '_REQUEST', '_SERVER', 'HTTP_SERVER_VARS', '_ENV', 'HTTP_ENV_VARS', '_FILES', 'HTTP_POST_FILES');\nforeach ($bad_global_list as $bad_global ) {\n if ( isset( $_REQUEST[$bad_global] ) ) {\n die('Bad Global');\n }\n}\n\n// Make global variables\nforeach ($_REQUEST as $name -> $value) {\n $$name = $value; // Creates a varable nammed $name equal to $value.\n}\n\nThough you'd want to tweak it to make your code more secure, at least by adding your global configuration variables (like the path and base url) to the bad globals list.\nYou could also use it to easily compile a list of all used get/post variables to help you eventually replace all occurrences of, say $return_url, with $_REQUEST['return_url]; \n",
"I know that there's a way to set php.ini values for that script with a certain command, I thus went looking and found this too - Goto last post on page\nI also found the following post which may be of use - Goto last post on the page\nI will add to this more if nobody has found an answer but I must now catch a train.\n"
] | [
6,
3,
2,
0
] | [] | [] | [
"php",
"register_globals"
] | stackoverflow_0000006633_php_register_globals.txt |
Q:
OpenID Attribute Exchange - should I use it?
My website will be using only OpenID for authentication. I'd like to pull user details down via attribute exchange, but attribute exchange seems to have caused a lot of grief for StackOverflow.
What is the current state of play in the industry? Does any OpenID provider do a decent job of attribute exchange?
Should I just steer away from OpenID attribute exchange altogether?
How can I deal with inconsistent support for functionality?
A:
Here on Stack Overflow, we're just using the Simple Registration extension for now, as there were some issues with Attribute Exchange (AX).
The biggest was OpenID Providers (OP) not agreeing on which attribute type urls to use. The finalized spec for AX says that attribute urls should come from http://www.axschema.org/ However, some OPs, especially our favorite http://myopenid.com, recognize other urls. I wasn't going to keep a list of which ones were naughty and which were nice!
The other problem was that most of the OPs I tried just didn't return information when queried with AX - I might have been doing something wrong (happens quite frequently :) ), but I had made relevant details public on my profiles and we're using the latest, most excellent .NET library, DotNetOpenId.
We'll definitely revisit AX here on Stack Overflow when we get a little more time, as a seamless user experience is very important to us!
A:
While Attribute Exchange has its problems (I'm sure someone from SO can tell you more), it does have a lot of benefits. To some extent it depends on whether you really need it or not. Simple Registration seems to do that job, and it might make sense to just ask the user for certain values. Use common sense and don't get stuck shoving everything down the One True Way.
| OpenID Attribute Exchange - should I use it? | My website will be using only OpenID for authentication. I'd like to pull user details down via attribute exchange, but attribute exchange seems to have caused a lot of grief for StackOverflow.
What is the current state of play in the industry? Does any OpenID provider do a decent job of attribute exchange?
Should I just steer away from OpenID attribute exchange altogether?
How can I deal with inconsistent support for functionality?
| [
"Here on Stack Overflow, we're just using the Simple Registration extension for now, as there were some issues with Attribute Exchange (AX).\nThe biggest was OpenID Providers (OP) not agreeing on which attribute type urls to use. The finalized spec for AX says that attribute urls should come from http://www.axschema.org/ However, some OPs, especially our favorite http://myopenid.com, recognize other urls. I wasn't going to keep a list of which ones were naughty and which were nice!\nThe other problem was that most of the OPs I tried just didn't return information when queried with AX - I might have been doing something wrong (happens quite frequently :) ), but I had made relevant details public on my profiles and we're using the latest, most excellent .NET library, DotNetOpenId.\nWe'll definitely revisit AX here on Stack Overflow when we get a little more time, as a seamless user experience is very important to us!\n",
"While Attribute Exchange has it's problems (I'm sure someone from SO can tell you more), it does have a lot of benefits. To some extent it depends on whether you really need it or not. Simple Registration seems to do that job, and it might make sense to just ask the user for certain values. Use common sense and don't get stuck shoving everything down the One True Way.\n"
] | [
17,
1
] | [] | [] | [
"authentication",
"openid"
] | stackoverflow_0000011903_authentication_openid.txt |
Q:
VisualSVN undelete with TortoiseSVN
Using TortoiseSVN against VisualSVN I deleted a source file that I should not have deleted. Now this isn't a train smash, because I can get the file back from the daily backup. However, I would like to undelete it from SVN (VisualSVN) so that I can get the history back, but I can't work out how to do that.
Anybody know how to undelete a file from VisualSVN either using the VisualSVN interface or the latest version of TortoiseSVN?
A:
What you have to do is the following:
Right click on the folder where you think it is.
Choose Show Log under TortioseSVN
Find the checkin that the file was deleted in
Go down the list and find the file
Select Revert changes for this version to undelete.
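If you prefer the command line, copying the file back from the revision just before the delete achieves the same thing and preserves its history (URL and revision are placeholders):

svn copy "https://my.server/svn/repo/trunk/Deleted.cs@67211" Deleted.cs
svn commit -m "Resurrected Deleted.cs, history intact"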
A:
Did you only delete it locally, or did you commit the deletion? In the latter case, try checking out the previous revision.
| VisualSVN undelete with TortoiseSVN | Using TortoiseSVN against VisualSVN I deleted a source file that I should not have deleted. Now this isn't a train smash, because I can get the file back from the daily backup. However, I would like to undelete it from SVN (VisualSVN) so that I can get the history back, but I can't work out how to do that.
Anybody know how to undelete a file from VisualSVN either using the VisualSVN interface or the latest version of TortoiseSVN?
| [
"What you have to do is the following:\n\nRight click on the folder where you think it is.\nChoose Show Log under TortioseSVN\nFind the checkin that the file was deleted in\nGo down the list and find the file\nSelect Revert changes for this version to undelete.\n\n",
"Did you only delete it locally, or did you commit the deletion? In the latter case, try checking out the previous revision.\n"
] | [
12,
1
] | [] | [] | [
"svn",
"tortoisesvn",
"visualsvn_server"
] | stackoverflow_0000013848_svn_tortoisesvn_visualsvn_server.txt |
Q:
How to keyboard down or up between dropdown "options"?
I have a custom built ajax [div] based dynamic dropdown.
I have an [input] box which, onkeyup, runs an Ajax search that returns results in divs which are drawn back in using innerHTML. These divs all have highlights onmouseover, so a typical successful search yields the following structure (pardon the semi-code):
[input]
[div id=results] //this gets overwritten contantly by my AJAX function
[div id=result1 onmouseover=highlight onclick=input.value=result1]
[div id=result2 onmouseover=highlight onclick=input.value=result2]
[div id=result2 onmouseover=highlight onclick=input.value=result2]
[/div]
It works.
However, I'm missing the important functions behind regular HTML elements. I can't keyboard down or up between "options".
I know javascript handles keyboard events, but I haven't been able to find a good guide. (Of course, the follow-up question will end up being: can I use <ENTER> to trigger that onclick event?)
A:
What you need to do is attach event listeners to the div with id="results". You can do this by adding onkeyup, onkeydown, etc. attributes to the div when you create it or you can attach these using JavaScript.
My recommendation would be that you use an AJAX library like YUI, jQuery, Prototype, etc. for two reasons:
It sounds like you are trying to create an Auto Complete control which is something most AJAX libraries should provide. If you can use an existing component you'll save yourself a lot of time.
Even if you don't want to use the control provided by a library, all libraries provide event libraries that help to hide the differences between the event APIs provided by different browsers.
The article “Forget addEvent, use Yahoo!’s Event Utility” provides a good summary of what an event library should provide for you. I'm pretty sure that the event libraries provided by jQuery, Prototype, et al. provide similar features.
If that article goes over your head have a look at this documentation first and then re-read the original article (I found the article made much more sense after I'd used the event library).
A couple of other things:
Using JavaScript gives you much more control than writing onkeyup etc. attributes into your HTML. Unless you want to do something really simple I would use JavaScript.
If you write your own code to handle keyboard events a good key code reference is really handy.
A:
Off the top of my head, I would think that you'd need to maintain some form of a data structure in the JavaScript that reflects the items in the current dropdown list. You'd also need a reference to the currently active/selected item.
Each time keyup or keydown is fired, update the reference to the active/selected item in the data structure. To provide highlighting information on the UI, add or remove a class name that is styled via CSS based on if the item is active/selected or not.
Also, this isn't a biggy, but innerHTML is not really standard (look into createTextNode(), createElement(), and appendChild() for standard ways of creating data). You may also want to see about attaching event handlers in the JavaScript rather than doing so in an HTML attribute.
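To sketch the keyboard part against the structure from the question (the input id 'searchbox' is hypothetical; key codes: 38 = up, 40 = down, 13 = enter; reset selected to -1 whenever the AJAX call redraws the list):

var selected = -1;

document.getElementById('searchbox').onkeydown = function (e) {
  e = e || window.event;
  var options = document.getElementById('results').getElementsByTagName('div');
  if (e.keyCode === 40) {        // down
    selected = Math.min(selected + 1, options.length - 1);
  } else if (e.keyCode === 38) { // up
    selected = Math.max(selected - 1, 0);
  } else if (e.keyCode === 13 && selected >= 0) { // enter
    options[selected].onclick(); // reuse the existing click handler
    return false;                // keep enter from submitting the form
  } else {
    return true;                 // let other keys run the AJAX search
  }
  for (var i = 0; i < options.length; i++) {
    options[i].className = (i === selected) ? 'highlight' : '';
  }
  return false;
};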
| How to keyboard down or up between dropdown "options"? | I have a custom built ajax [div] based dynamic dropdown.
I have an [input] box which, onkeyup, runs an Ajax search that returns results in divs which are drawn back in using innerHTML. These divs all have highlights onmouseover, so a typical successful search yields the following structure (pardon the semi-code):
[input]
[div id=results] //this gets overwritten contantly by my AJAX function
[div id=result1 onmouseover=highlight onclick=input.value=result1]
[div id=result2 onmouseover=highlight onclick=input.value=result2]
[div id=result2 onmouseover=highlight onclick=input.value=result2]
[/div]
It works.
However, I'm missing the important functions behind regular HTML elements. I can't keyboard down or up between "options".
I know javascript handles keyboard events, but I haven't been able to find a good guide. (Of course, the follow-up question will end up being: can I use <ENTER> to trigger that onclick event?)
| [
"What you need to do is attach event listeners to the div with id=\"results\". You can do this by adding onkeyup, onkeydown, etc. attributes to the div when you create it or you can attach these using JavaScript.\nMy recommendation would be that you use an AJAX library like YUI, jQuery, Prototype, etc. for two reasons:\n\nIt sounds like you are trying to create an Auto Complete control which is something most AJAX libaries should provide. If you can use an existing component you'll save yourself a lot of time.\nEven if you don't want to use the control provided by a library, all libraries provide event libraries that help to hide the differences between the event APIs provided by different browsers.\n\nForget addEvent, use Yahoo!’s Event Utility provides a good summary of what an event library should provide for you. I'm pretty sure that the event libraries provided by jQuery, Prototype, et. al. provide similar features. \nIf that article goes over your head have a look at this documentation first and then re-read the original article (I found the article made much more sense after I'd used the event library).\nA couple of other things:\n\nUsing JavaScript gives you much more control than writing onkeyup etc. attributes into your HTML. Unless you want to do something really simple I would use JavaScript.\nIf you write your own code to handle keyboard events a good key code reference is really handy.\n\n",
"Off the top of my head, I would think that you'd need to maintain some form of a data structure in the JavaScript that reflects the items in the current dropdown list. You'd also need a reference to the currently active/selected item. \nEach time keyup or keydown is fired, update the reference to the active/selected item in the data structure. To provide highlighting information on the UI, add or remove a class name that is styled via CSS based on if the item is active/selected or not.\nAlso, this isn't a biggy, but innerHTML is not really standard (look into createTextNode(), createElement(), and appendChild() for standard ways of creating data). You may also want to see about attaching event handlers in the JavaScript rather than doing so in an HTML attribute.\n"
] | [
9,
1
] | [] | [] | [
"dom",
"events",
"javascript",
"keyboard"
] | stackoverflow_0000004884_dom_events_javascript_keyboard.txt |
Q:
Python Regular Expressions to implement string unescaping
I am trying to implement string unescaping with Python regex and backreferences, and it doesn't seem to want to work very well. I'm sure it's something I'm doing wrong but I can't figure out what...
>>> import re
>>> mystring = r"This is \n a test \r"
>>> p = re.compile( "\\\\(\\S)" )
>>> p.sub( "\\1", mystring )
'This is n a test r'
>>> p.sub( "\\\\\\1", mystring )
'This is \\n a test \\r'
>>> p.sub( "\\\\1", mystring )
'This is \\1 a test \\1'
I'd like to replace \\[char] with \[char], but backreferences in Python don't appear to follow the same rules they do in every other implementation I've ever used. Could someone shed some light?
A:
Isn't that what Anders' second example does?
In 2.5 there's also a string-escape encoding you can apply:
>>> mystring = r"This is \n a test \r"
>>> mystring.decode('string-escape')
'This is \n a test \r'
>>> print mystring.decode('string-escape')
This is
a test
>>>
A:
Well, I think you might have missed the r or miscounted the backslashes...
"\\n" == r"\n"
>>> import re
>>> mystring = r"This is \\n a test \\r"
>>> p = re.compile( r"[\\][\\](.)" )
>>> print p.sub( r"\\\1", mystring )
This is \n a test \r
>>>
Which, if I understood is what was requested.
I suspect the more common request is this:
>>> d = {'n':'\n', 'r':'\r', 'f':'\f'}
>>> p = re.compile(r"[\\]([nrfv])")
>>> print p.sub(lambda mo: d[mo.group(1)], mystring)
This is \
a test \
>>>
The interested student should also read Ken Thompson's "Reflections on Trusting Trust", wherein our hero uses a similar example to explain the perils of trusting compilers you haven't bootstrapped from machine code yourself.
A:
The idea is that I'll read in an escaped string, and unescape it (a feature notably lacking from Python, which you shouldn't need to resort to regular expressions for in the first place). Unfortunately I'm not being tricked by the backslashes...
Another illustrative example:
>>> mystring = r"This is \n ridiculous"
>>> print mystring
This is \n ridiculous
>>> p = re.compile( r"\\(\S)" )
>>> print p.sub( 'bloody', mystring )
This is bloody ridiculous
>>> print p.sub( r'\1', mystring )
This is n ridiculous
>>> print p.sub( r'\\1', mystring )
This is \1 ridiculous
>>> print p.sub( r'\\\1', mystring )
This is \n ridiculous
What I'd like it to print is
This is
ridiculous
A:
You are being tricked by Python's representation of the result string. The Python expression:
'This is \\n a test \\r'
represents the string
This is \n a test \r
which is I think what you wanted. Try adding 'print' in front of each of your p.sub() calls to print the actual string returned instead of a Python representation of the string.
>>> mystring = r"This is \n a test \r"
>>> mystring
'This is \\n a test \\r'
>>> print mystring
This is \n a test \r
A:
Mark; his second example requires every escaped character thrown into an array initially, which generates a KeyError if the escape sequence happens not to be in the array. It will die on anything but the three characters provided (give \v a try), and enumerating every possible escape sequence every time you want to unescape a string (or keeping a global array) is a really bad solution. Analogous to PHP, that's using preg_replace_callback() with a lambda instead of preg_replace(), which is utterly unnecessary in this situation.
I'm sorry if I'm coming off as a dick about it, I'm just utterly frustrated with Python. This is supported by every other regular expression engine I've ever used, and I can't understand why this wouldn't work.
Thank you for responding; the string.decode('string-escape') function is precisely what i was looking for initially. If someone has a general solution to the regex backreference problem, feel free to post it and I'll accept that as an answer as well.
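For reference, here is a minimal sketch of one such general solution using a substitution callback; the ESCAPES table below is illustrative and deliberately incomplete, so extend it as needed:
import re

# Map escape letters to their characters; unknown escapes just drop the backslash.
ESCAPES = {'n': '\n', 'r': '\r', 't': '\t', 'f': '\f', 'v': '\v', '\\': '\\'}

def unescape(s):
    return re.sub(r'\\(.)', lambda mo: ESCAPES.get(mo.group(1), mo.group(1)), s)

print unescape(r"This is \n a test \r")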
| Python Regular Expressions to implement string unescaping | I am trying to implement string unescaping with Python regex and backreferences, and it doesn't seem to want to work very well. I'm sure it's something I'm doing wrong but I can't figure out what...
>>> import re
>>> mystring = r"This is \n a test \r"
>>> p = re.compile( "\\\\(\\S)" )
>>> p.sub( "\\1", mystring )
'This is n a test r'
>>> p.sub( "\\\\\\1", mystring )
'This is \\n a test \\r'
>>> p.sub( "\\\\1", mystring )
'This is \\1 a test \\1'
I'd like to replace \\[char] with \[char], but backreferences in Python don't appear to follow the same rules they do in every other implementation I've ever used. Could someone shed some light?
| [
"Isn't that what Anders' second example does?\nIn 2.5 there's also a string-escape encoding you can apply:\n>>> mystring = r\"This is \\n a test \\r\"\n>>> mystring.decode('string-escape')\n'This is \\n a test \\r'\n>>> print mystring.decode('string-escape')\nThis is \n a test \n>>> \n\n",
"Well, I think you might have missed the r or miscounted the backslashes...\n\"\\\\n\" == r\"\\n\"\n\n>>> import re\n>>> mystring = r\"This is \\\\n a test \\\\r\"\n>>> p = re.compile( r\"[\\\\][\\\\](.)\" )\n>>> print p.sub( r\"\\\\\\1\", mystring )\nThis is \\n a test \\r\n>>>\n\nWhich, if I understood is what was requested.\nI suspect the more common request is this:\n>>> d = {'n':'\\n', 'r':'\\r', 'f':'\\f'}\n>>> p = re.compile(r\"[\\\\]([nrfv])\")\n>>> print p.sub(lambda mo: d[mo.group(1)], mystring)\nThis is \\\n a test \\\n>>>\n\nThe interested student should also read Ken Thompson's Reflections on Trusting Trust\", wherein our hero uses a similar example to explain the perils of trusting compilers you haven't bootstrapped from machine code yourself.\n",
"The idea is that I'll read in an escaped string, and unescape it (a feature notably lacking from Python, which you shouldn't need to resort to regular expressions for in the first place). Unfortunately I'm not being tricked by the backslashes...\nAnother illustrative example:\n>>> mystring = r\"This is \\n ridiculous\"\n>>> print mystring\nThis is \\n ridiculous\n>>> p = re.compile( r\"\\\\(\\S)\" )\n>>> print p.sub( 'bloody', mystring )\nThis is bloody ridiculous\n>>> print p.sub( r'\\1', mystring )\nThis is n ridiculous\n>>> print p.sub( r'\\\\1', mystring )\nThis is \\1 ridiculous\n>>> print p.sub( r'\\\\\\1', mystring )\nThis is \\n ridiculous\n\nWhat I'd like it to print is\nThis is \nridiculous\n\n",
"You are being tricked by Python's representation of the result string. The Python expression:\n'This is \\\\n a test \\\\r'\n\nrepresents the string\nThis is \\n a test \\r\n\nwhich is I think what you wanted. Try adding 'print' in front of each of your p.sub() calls to print the actual string returned instead of a Python representation of the string.\n>>> mystring = r\"This is \\n a test \\r\"\n>>> mystring\n'This is \\\\n a test \\\\r'\n>>> print mystring\nThis is \\n a test \\r\n\n",
"Mark; his second example requires every escaped character thrown into an array initially, which generates a KeyError if the escape sequence happens not to be in the array. It will die on anything but the three characters provided (give \\v a try), and enumerating every possible escape sequence every time you want to unescape a string (or keeping a global array) is a really bad solution. Analogous to PHP, that's using preg_replace_callback() with a lambda instead of preg_replace(), which is utterly unnecessary in this situation.\nI'm sorry if I'm coming off as a dick about it, I'm just utterly frustrated with Python. This is supported by every other regular expression engine I've ever used, and I can't understand why this wouldn't work.\nThank you for responding; the string.decode('string-escape') function is precisely what i was looking for initially. If someone has a general solution to the regex backreference problem, feel free to post it and I'll accept that as an answer as well.\n"
] | [
10,
3,
1,
0,
0
] | [] | [] | [
"backreference",
"python",
"regex"
] | stackoverflow_0000013791_backreference_python_regex.txt |
Q:
quoting System.DirectoryServices.ResultPropertyCollection
I'm missing something here:
$objSearcher = New-Object System.DirectoryServices.DirectorySearcher
$objSearcher.SearchRoot = New-Object System.DirectoryServices.DirectoryEntry
$objSearcher.Filter = ("(objectclass=computer)")
$computers = $objSearcher.findall()
So the question is why do the two following outputs differ?
$computers | %{
"Server name in quotes $_.properties.name"
"Server name not in quotes " + $_.properties.name
}
PS> $computers[0] | %{"$_.properties.name"; $_.properties.name}
System.DirectoryServices.SearchResult.properties.name
GORILLA
A:
When you included $_.properties.name in the string, it was returning the type name of the property. When a variable is included in a string and the string is evaluated, it calls the ToString method on that object referenced by the variable (not including the members specified after).
In this case, the ToString method is returning the type name. You can force the evaluation of the variable and members similar to what EBGreen suggested, but by using
"Server name in quotes $($_.properties.name)"
In the other scenario PowerShell is evaluating the variable and members specified first and then adding it to the previous string.
You are right that you are getting back a collection of properties. If you pipe $computers[0].properties to get-member, you can explore the object model right from the command line.
The important part is below.
TypeName: System.DirectoryServices.ResultPropertyCollection
Name MemberType Definition
Values Property System.Collections.ICollection Values {get;}
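A quick way to see the same two behaviors without touching Active Directory is to use a plain hashtable as a stand-in for the search result:
$h = @{ Name = "GORILLA" }
"Plain: $h.Name"               # ToString() of $h, then the literal text ".Name"
"Subexpression: $($h.Name)"    # the member access is evaluated first
The first line prints "Plain: System.Collections.Hashtable.Name"; the second prints "Subexpression: GORILLA".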
A:
I believe it has to do with the way that PS interpolates information in the "". Try this:
"Server name in quotes $($_.properties).name"
Or you may even need one more set of $(). I'm not somewhere that I can test it at right now.
A:
Close-- The below works correctly, but I'd be interested if anyone has a deeper explanation.
PS C:\> $computers[0] | %{ "$_.properties.name"; "$($_.properties.name)" }
System.DirectoryServices.SearchResult.properties.name
GORILLA
So it would seem that $_.properties.name doesn't dereference like I expected it to. If I'm visualizing properly, the fact that the name property is multivalued causes it to return an array, which (I think) would explain why the following works:
$computers[0] | %{ $_.properties.name[0]}
If "name" were a string this should return the first character, but because it's an array it returns the first string.
| quoting System.DirectoryServices.ResultPropertyCollection | I'm missing something here:
$objSearcher = New-Object System.DirectoryServices.DirectorySearcher
$objSearcher.SearchRoot = New-Object System.DirectoryServices.DirectoryEntry
$objSearcher.Filter = ("(objectclass=computer)")
$computers = $objSearcher.findall()
So the question is why do the two following outputs differ?
$computers | %{
"Server name in quotes $_.properties.name"
"Server name not in quotes " + $_.properties.name
}
PS> $computers[0] | %{"$_.properties.name"; $_.properties.name}
System.DirectoryServices.SearchResult.properties.name
GORILLA
| [
"When you included $_.properties.name in the string, it was returning the type name of the property. When a variable is included in a string and the string is evaluated, it calls the ToString method on that object referenced by the variable (not including the members specified after). \nIn this case, the ToString method is returning the type name. You can force the evaluation of the variable and members similar to what EBGreen suggested, but by using \n\"Server name in quotes $($_.properties.name)\" \n\nIn the other scenario PowerShell is evaluating the variable and members specified first and then adding it to the previous string.\nYou are right that you are getting back a collection of properties. If you pipe $computer[0].properties to get-member, you can explore the object model right from the command line. \nThe important part is below.\n\nTypeName: System.DirectoryServices.ResultPropertyCollection\nName MemberType Definition\n\nValues Property System.Collections.ICollection Values {get;}\n\n",
"I believe it has to do with the way that PS interpolates information in the \"\". Try this:\n\"Server name in quotes $($_.properties).name\" \nOr you may even need one more set of $(). I'm not somewhere that I can test it at right now.\n",
"Close-- The below works correctly, but I'd be interested if anyone has a deeper explanation.\nPS C:\\> $computers[0] | %{ \"$_.properties.name\"; \"$($_.properties.name)\" }\nSystem.DirectoryServices.SearchResult.properties.name\nGORILLA\n\nSo it would seem that $_.properties.name doesn't deference like I expected it to. If I'm visualizing properly, the fact that the name property is multivalued causes it to return an array. Which (I think) would explain why the following works:\n$computers[0] | %{ $_.properties.name[0]}\n\nIf \"name\" were a string this should return the first character, but because it's an array it returns the first string. \n"
] | [
1,
0,
0
] | [] | [] | [
".net",
"active_directory",
"powershell",
"scripting"
] | stackoverflow_0000013753_.net_active_directory_powershell_scripting.txt |
Q:
How big would such a database be?
I'm trying to figure out how big a certain database would be (it hasn't been created yet). I know how many rows and what the tables will be. Is there a feature in Oracle that will tell me the size of such a theoretical database? Is there a known math formula I can use? I know there is a feature to determine the size of an existing database, but I want to know how big it will be before I create it.
A:
You might try prototyping your design - create an initial version of the database and write some scripts (or use a tool) to populate the tables with a reasonable amount of data. Then you will know for sure how much space X rows takes up. If it's too much, you can go back to the drawing board with your design. I know you want a figure before creating the database, but you'll never be able to account for everything that's going on with the physical data files under the hood.
A:
You can work from the size of the data types for the columns in a table to get a rough estimate of the size of a row in that table. Then multiply each table's estimated row size by its expected number of rows, and sum over all tables for an estimate of the whole database.
Long-handed, I know, but this is how I normally do this.
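For example, a back-of-the-envelope estimate for a hypothetical Oracle table (the byte counts here are rough illustrations, not exact storage figures):
id NUMBER(10)        ~  6 bytes
name VARCHAR2(50)    ~ 25 bytes (assuming columns are half full on average)
created DATE         =  7 bytes
row header/length    ~  6 bytes
-------------------------------
~ 44 bytes per row, so 1,000,000 rows ~ 42 MB before indexes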
A:
To be accurate, this can get really complex. For example, this is how you do it on MS SQL Server:
http://msdn.microsoft.com/en-us/library/aa933068(SQL.80).aspx
A:
You also need to include indexes in your estimates. I've seen systems where the indexes were as big as the data. The only way I would trust the answer is to do prototyping like Eric Z Beard suggests. Different database systems have different overhead, but they all have it.
A:
Having an exact size wasn't too important, so I went with littlegeek's method. I figured out what my tables and columns would be, and looked up the sizes of the data types, then did some good 'ole multiplying.
| How big would such a database be? | I'm trying to figure out how big a certain database would be (it hasn't been created yet). I know how many rows and what the tables will be. Is there a feature in Oracle that will tell me the size of such a theoretical database? Is there a known math formula I can use? I know there is a feature to determine the size of an existing database, but I want to know how big it will be before I create it.
| [
"You might try prototyping your design - create an initial version of the database and write some scripts (or use a tool) to populate the tables with a reasonable amount of data. Then you will know for sure how much space X rows takes up. If it's too much, you can go back to the drawing board with your design. I know you want a figure before creating the database, but you'll never be able to account for everything that's going on with the physical data files under the hood.\n",
"you can from the size of the data types for the columns in a table. You can then get a rough estimate of the size of a row in that table. then for 1 to n tables, then for 1 row in 1 table for x rows in x tables = estimate of the database for a given rowsize.\nLong handed I know but this is how i normally do this.\n",
"To be accurate, this can get really complex. For example, this is how you do it on MS SQL Server:\nhttp://msdn.microsoft.com/en-us/library/aa933068(SQL.80).aspx\n",
"You also need to include indexes in your estimates. I've seen systems where the indexes were as big as the data. The only way I would trust the answer is to do prototyping like Eric Z Beard suggests. Different datbase systems have different overhead, but they all have it.\n",
"Having an exact size wasn't too important, so I went with littlegeek's method. I figured out what my tables and columns would be, and looked up the sizes of the data types, then did some good 'ole multiplying.\n"
] | [
7,
5,
1,
1,
0
] | [] | [] | [
"database",
"oracle"
] | stackoverflow_0000011055_database_oracle.txt |
Q:
How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order?
How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order?
A:
// Boost DFS example on an undirected graph.
// Create a sample graph, traverse its nodes
// in DFS order and print out their values.
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/depth_first_search.hpp>
#include <iostream>
using namespace std;
typedef boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS> MyGraph;
typedef boost::graph_traits<MyGraph>::vertex_descriptor MyVertex;
class MyVisitor : public boost::default_dfs_visitor
{
public:
void discover_vertex(MyVertex v, const MyGraph& g) const
{
cerr << v << endl;
return;
}
};
int main()
{
MyGraph g;
boost::add_edge(0, 1, g);
boost::add_edge(0, 2, g);
boost::add_edge(1, 2, g);
boost::add_edge(1, 3, g);
MyVisitor vis;
boost::depth_first_search(g, boost::visitor(vis));
return 0;
}
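For this edge set, the program prints the discovery order to cerr; with the vecS/listS selectors used above the traversal is deterministic and should produce:
0
1
2
3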
| How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order? | How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order?
| [
"// Boost DFS example on an undirected graph.\n// Create a sample graph, traverse its nodes\n// in DFS order and print out their values.\n\n#include <boost/graph/adjacency_list.hpp>\n#include <boost/graph/depth_first_search.hpp>\n#include <iostream>\nusing namespace std;\n\ntypedef boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS> MyGraph;\ntypedef boost::graph_traits<MyGraph>::vertex_descriptor MyVertex;\n\nclass MyVisitor : public boost::default_dfs_visitor\n{\npublic:\n void discover_vertex(MyVertex v, const MyGraph& g) const\n {\n cerr << v << endl;\n return;\n }\n};\n\nint main()\n{\n MyGraph g;\n boost::add_edge(0, 1, g);\n boost::add_edge(0, 2, g);\n boost::add_edge(1, 2, g);\n boost::add_edge(1, 3, g);\n\n MyVisitor vis;\n boost::depth_first_search(g, boost::visitor(vis));\n\n return 0;\n}\n\n"
] | [
36
] | [] | [] | [
"boost_graph",
"c++"
] | stackoverflow_0000014126_boost_graph_c++.txt |
Q:
How would you go about evaluating a programmer?
A few weeks ago, I was assigned to evaluate all our programmers. I'm very uncomfortable with this since I was the one who taught everyone the shop's programming language (they all got out of college not knowing the language and as luck would have it, I'm very proficient with it.). On the evaluation, I was very biased on their performance (perfect scores).
I'm glad that our programming shop doesn't require an average performance level but I heard horror stories of shops which do require an average level.
My questions are as follows:
As a programmer, what evaluation questions would you like to see?
As a manager, what evaluation questions would you like to see?
As the evaluator, how can you prevent bias in your evaluation?
I would love to remove the evaluation test. Are there any advantages to having an evaluation test? Any disadvantages?
A:
Gets things done is really all you need to evaluate a developer. After that you look at the quality that the developer generates. Do they write unit tests and believe in testing and being responsible for the code they generate? Do they take initiative to fix bugs without being assigned them? Are they passionate about coding? Are they always constantly learning, trying to find better ways to accomplish a task or make a process better? These questions are pretty much how I judge developers directly under me. If they are not directly under you and you are not a direct report for them, then you really shouldn't be evaluating them. If you are assigned in evaluating those programmers that aren't under you, then you need to be proactive to answer the above questions about them, which can be hard.
You can't remove the evaluation test. I know it can become tedious sometimes, but I actually enjoy doing it and it's invaluable for the developer you are evaluating. You need to be a manager that cares about how your developers do. You are a direct reflection on them and as they are of you. One question I always leave up to the developer is for them to evaluate me. The evaluation needs to be a two lane road.
I have to also evaluate off a cookie cutter list of questions, which I do, but I always add the above and try to make the evaluation fun and a learning exercise during the time I have the developer one on one, it is all about the developer you are reviewing.
A:
I would first consider not necessarily the number of lines of code, but the value of the code that the person adds as reflective of course to what they are assigned to do. Someone told to maintain code versus building a new app is very different. Also consider how the person uses new techniques to make the code relevant and updated? How maintainable is the code the person creates? Do they do things in a manner that is logical and understandable to the rest of the team? Does their coding improve the app or just wreck it? Last and not least does their coding improve over time?
A:
What about getting everyone's input? Everyone that a person is working with will have a unique insight into that person. One person might think someone is a slacker, while another person sees that they are spending a lot of time planning before they start coding, etc.
A:
What about getting everyone's input? Everyone that a person is working with will have a unique insight into that person.
That would work if (1) evaluation is conducted with open doors and (2) you've worked with that person on one project or even on the same module. As the person evaluating them, I couldn't judge the programmers who I didn't directly work with.
One person might think someone is a slacker, while another person sees that they are spending a lot of time planning before they start coding
Unfortunately, this is debatable. Someone who looks like a slacker might be in deep thoughts, or maybe not. And is someone who spend a long time planning, necessarily a bad programmer?
I believe a good evaluation question would be able to answer this.
| How would you go about evaluating a programmer? | A few weeks ago, I was assigned to evaluate all our programmers. I'm very uncomfortable with this since I was the one who taught everyone the shop's programming language (they all got out of college not knowing the language and as luck would have it, I'm very proficient with it.). On the evaluation, I was very biased on their performance (perfect scores).
I'm glad that our programming shop doesn't require an average performance level but I heard horror stories of shops which do require an average level.
My questions are as follows:
As a programmer, what evaluation questions would you like to see?
As a manager, what evaluation questions would you like to see?
As the evaluator, how can you prevent bias in your evaluation?
I would love to remove the evaluation test. Are there any advantages to having an evaluation test? Any disadvantages?
| [
"Gets things done is really all you need to evaluate a developer. After that you look at the quality that the developer generates. Do they write unit tests and believe in testing and being responsible for the code they generate? Do they take initiative to fix bugs without being assigned them? Are they passionate about coding? Are they always constantly learning, trying to find better ways to accomplish a task or make a process better? These questions are pretty much how I judge developers directly under me. If they are not directly under you and you are not a direct report for them, then you really shouldn't be evaluating them. If you are assigned in evaluating those programmers that aren't under you, then you need to be proactive to answer the above questions about them, which can be hard.\nYou can't remove the evaluation test. I know it can become tedious sometimes, but I actually enjoy doing it and it's invaluable for the developer you are evaluating. You need to be a manager that cares about how your developers do. You are a direct reflection on them and as they are of you. One question I always leave up to the developer is for them to evaluate me. The evaluation needs to be a two lane road.\nI have to also evaluate off a cookie cutter list of questions, which I do, but I always add the above and try to make the evaluation fun and a learning exercise during the time I have the developer one on one, it is all about the developer you are reviewing.\n",
"I would first consider not necessarily the number of lines of code, but the value of the code that the person adds as reflective of course to what they are assigned to do. Someone told to maintain code verses building a new app is very different. Also consider how the person uses new techniques to make the code relevant and updated? How maintainable is the code the person creates? Do they do things in a manner that is logical and understandable to the rest of the team? Does their coding improve the app or just wreck it? Last and not least does their coding improve over time?\n",
"What about getting everyone's input? Everyone that a person is working with will have a unique insight into that person. One person might think someone is a slacker, while another person sees that they are spending a lot of time planning before they start coding, etc.\n",
"\nWhat about getting everyone's input? Everyone that a person is working with will have a unique insight into that person. \n\nThat would work if (1) evaluation is conducted with open doors and (2) you've worked with that person on one project or even on the same module. As the person evaluating them, I couldn't judge the programmers who I didn't directly work with.\n\nOne person might think someone is a slacker, while another person sees that they are spending a lot of time planning before they start coding\n\nUnfortunately, this is debatable. Someone who looks like a slacker might be in deep thoughts, or maybe not. And is someone who spend a long time planning, necessarily a bad programmer?\nI believe a good evaluation question would be able to answer this.\n"
] | [
11,
2,
0,
0
] | [] | [] | [
"evaluation"
] | stackoverflow_0000014106_evaluation.txt |
Q:
IE7: header above menu
I am having trouble with IE7. I have a header, which is an IMG. Under it I have a div that represents a menu; they have to be attached to each other without space in between. Both are 1000px wide. In Opera and Firefox the header and the menu are neatly attached to each other. However, in IE7, there is a small space between the menu DIV and the IMG. I have tried explicitly defining padding and margin on the IMG, however it does not work. I have had this problem before, so it seems to be an IE7 quirk.
My HTML Code:
<div id="middle">
<img id="ctl00_headerHolder_headerImage" src="pictures/headers/header_home.jpg" style="border-width:0px;" />
<div id="ctl00_menuPanel" class="menu">
<a id="ctl00_home" href="Default.aspx" style="color:#FFCC33;">Home</a> |
<a id="ctl00_leden" href="Leden.aspx">Leden</a> |
<a id="ctl00_agenda" href="Agenda.aspx">Agenda</a> |
<a id="ctl00_fotos" href="Fotos.aspx">Foto's</a> |
<a id="ctl00_geschiedenis" href="Geschiedenis.aspx">Geschiedenis</a> |
<a id="ctl00_gastenboek" href="Gastenboek.aspx">Gastenboek</a>
</div>
</div>
A:
Try the IE Developer Toolbar, which will let you inspect what is going on with the elements and give you outlines of the areas covered. It might give you a better understanding of the problem.
A:
The solution:
img {
padding: 0px;
margin: 0px;
display: block;
}
display: block
A:
I run into this a lot. Rather than hunting down the specific behavior, try sanity checking by explicitly setting padding and margin properties for img/div/etc selectors to 0, set border-style: none border-width: 0px border="0" etc.
IE Dev Toolbar is a must-have but whether it helps you with figuring out single-pixel issues is unlikely.
A:
Instead of resorting to display block, note that IE7 does some seriously odd things with whitespace; try removing the whitespace between the image and the div, and see what happens.
A:
CSS Resets (like the YUI Reset CSS) are great for this kind of thing. They reset paddings, margins, and other display properties on a lot of HTML elements to minimize the display differences.
A:
The solution...display: block
That question couldn't be answered properly without knowing the rendering mode that the browser was in; you need to tell people what doctype you have if you have CSS rendering issues. The image behaviour you refer to is different in quirks mode as opposed to standards mode. A minimal test case must include a full HTML document and the CSS to reproduce the problem. Please don't ask people for help without giving them the information they need to answer easily without wasting their time...
A:
The real solution:
#middle { font-size: 0; line-height: 0; }
| IE7: header above menu | I am having trouble with IE7. I have a header, which is an IMG. Under it I have a div that represents a menu; they have to be attached to each other without space in between. Both are 1000px wide. In Opera and Firefox the header and the menu are neatly attached to each other. However, in IE7, there is a small space between the menu DIV and the IMG. I have tried explicitly defining padding and margin on the IMG, however it does not work. I have had this problem before, so it seems to be an IE7 quirk.
My HTML Code:
<div id="middle">
<img id="ctl00_headerHolder_headerImage" src="pictures/headers/header_home.jpg" style="border-width:0px;" />
<div id="ctl00_menuPanel" class="menu">
<a id="ctl00_home" href="Default.aspx" style="color:#FFCC33;">Home</a> |
<a id="ctl00_leden" href="Leden.aspx">Leden</a> |
<a id="ctl00_agenda" href="Agenda.aspx">Agenda</a> |
<a id="ctl00_fotos" href="Fotos.aspx">Foto's</a> |
<a id="ctl00_geschiedenis" href="Geschiedenis.aspx">Geschiedenis</a> |
<a id="ctl00_gastenboek" href="Gastenboek.aspx">Gastenboek</a>
</div>
</div>
| [
"Try the IE Developer Toolbar, which will let you inspect what is going on with the elements and give you outlines of the areas covered. It might give you a better understanding of the problem.\n",
"The solution: \nimg {\npadding: 0px;\nmargin: 0px;\ndisplay: block;\n}\n\ndisplay: block\n",
"I run into this a lot. Rather than hunting down the specific behavior, try sanity checking by explicity setting padding and margin properties for img/div/etc selectors to 0, set border-style: none border-width: 0px border=\"0\" etc.\nIE Dev Toolbar is a must-have but whether it helps you with figuring out single-pixel issues is unlikely.\n",
"Instead of resorting to display block, note that IE7 does some seriously odd things with whitespace; try removing the whitespace between the image and the div, and see what happens.\n",
"CSS Resets (like the YUI Reset CSS) are great for this kind of thing. They reset paddings, margins, and other display properties on a lot of HTML elements to minimize the display differences.\n",
"\n\nThe solution...display: block\n\n\nThat question couldn't be answered properly without knowing the rendering mode that the browser was in; you need to tell people what doctype you have if you have CSS rendering issues. The image behaviour you refer to is different in quirks mode as opposed to standards mode. A minimal test case must include a full HTML document and the CSS to reproduce the problem. Please don't ask people for help without giving them the information they need to answer easily without wasting their time...\n",
"The real solution:\n#middle { font-size: 0; line-height: 0; }\n\n"
] | [
3,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"css",
"html",
"internet_explorer_7"
] | stackoverflow_0000013851_css_html_internet_explorer_7.txt |
Q:
How to learn ADO.NET
I need to learn ADO.NET to build applications based on MS Office. I have read a good deal about ADO.NET in the MSDN Library, but everything seems rather messy to me.
What are the basics one must figure out when using ADO.NET? I think a few key words will suffice to let me organize my learning.
A:
There are three key components (assuming you're using SQL Server):
SQLConnection
SqlCommand
SqlDataReader
(if you're using something else, replace Sql with "Something", like MySqlConnection, OracleCommand)
Everything else is just built on top of that.
Example 1:
using (SqlConnection connection = new SqlConnection("CONNECTION STRING"))
using (SqlCommand command = new SqlCommand())
{
command.commandText = "SELECT Name FROM Users WHERE Status = @OnlineStatus";
command.Connection = connection;
command.Parameters.Add("@OnlineStatus", SqlDbType.Int).Value = 1; //replace with enum
connection.Open();
using (SqlDataReader dr = command.ExecuteReader())
{
List<string> onlineUsers = new List<string>();
while (dr.Read())
{
onlineUsers.Add(dr.GetString(0));
}
}
}
Example 2:
using (SqlConnection connection = new SqlConnection("CONNECTION STRING"))
using (SqlCommand command = new SqlCommand())
{
command.commandText = "DELETE FROM Users where Email = @Email";
command.Connection = connection;
command.Parameters.Add("@Email", SqlDbType.VarChar, 100).Value = "user@host.com";
connection.Open();
command.ExecuteNonQuery();
}
A:
Another way of getting a command object is to call connection.CreateCommand().
That way you shouldn't have to set the Connection property on the command object.
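For instance, a sketch (the query and connection string are placeholders):
using (SqlConnection connection = new SqlConnection("CONNECTION STRING"))
using (SqlCommand command = connection.CreateCommand())
{
    command.CommandText = "SELECT COUNT(*) FROM Users";
    connection.Open();
    int userCount = (int) command.ExecuteScalar();
}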
| How to learn ADO.NET | I need to learn ADO.NET to build applications based on MS Office. I have read a good deal about ADO.NET in the MSDN Library, but everything seems rather messy to me.
What are the basics one must figure out when using ADO.NET? I think a few key words will suffice to let me organize my learning.
| [
"There are three key components (assuming ur using SQL server):\n\nSQLConnection\nSqlCommand\nSqlDataReader\n\n(if you're using something else, replace Sql with \"Something\", like MySqlConnection, OracleCommand)\nEverything else is just built on top of that.\nExample 1:\nusing (SqlConnection connection = new SqlConnection(\"CONNECTION STRING\"))\nusing (SqlCommand command = new SqlCommand())\n{\n command.commandText = \"SELECT Name FROM Users WHERE Status = @OnlineStatus\";\n command.Connection = connection;\n command.Parameters.Add(\"@OnlineStatus\", SqlDbType.Int).Value = 1; //replace with enum\n connection.Open();\n\n using (SqlDataReader dr = command.ExecuteReader))\n {\n List<string> onlineUsers = new List<string>();\n\n while (dr.Read())\n {\n onlineUsers.Add(dr.GetString(0));\n }\n }\n}\n\nExample 2:\nusing (SqlConnection connection = new SqlConnection(\"CONNECTION STRING\"))\nusing (SqlCommand command = new SqlCommand())\n{\n command.commandText = \"DELETE FROM Users where Email = @Email\";\n command.Connection = connection;\n command.Parameters.Add(\"@Email\", SqlDbType.VarChar, 100).Value = \"user@host.com\";\n connection.Open();\n command.ExecuteNonQuery();\n}\n\n",
"Another way of getting a command object is to call connection.CreateCommand(). \nThat way you shouldn't have to set the Connection property on the command object. \n"
] | [
6,
0
] | [] | [] | [
"ado.net"
] | stackoverflow_0000004170_ado.net.txt |
Q:
Boost warnings with VC++ 9
When the Boost library/headers is used with VC++ 9 compilers (Visual C++ 2008 Express Edition or Visual Studio 2008), a lot of benign warnings are generated. They are of 2 kinds:
Warning about the Wp64 setting.
Warning about the compiler version.
How can I turn off these warnings?
A:
Warning about the Wp64 setting.
Turn off the /Wp64 setting which is set by default. You can find it in Project Properties -> C/C++ -> General.
Warning about the compiler version.
Go to the Boost trunk (online) and get the latest boost\boost\config\compiler\visualc.hpp header file. Diff it with the current file and merge the sections where _MSC_VER is equal to 1500. (1500 is the _MSC_VER value for VC9.)
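For reference, the check you are merging looks roughly like this (paraphrased, not the verbatim Boost source); raising the last-known _MSC_VER ceiling is what silences the warning:
// Paraphrased from boost/config/compiler/visualc.hpp
#if (_MSC_VER > 1500) // 1500 == VC9 / Visual Studio 2008
#  if defined(BOOST_ASSERT_CONFIG)
#     error "Unknown compiler version - please run the configure tests and report the results"
#  else
#     pragma message("Unknown compiler version - please run the configure tests and report the results")
#  endif
#endif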
| Boost warnings with VC++ 9 | When the Boost library/headers is used with VC++ 9 compilers (Visual C++ 2008 Express Edition or Visual Studio 2008), a lot of benign warnings are generated. They are of 2 kinds:
Warning about the Wp64 setting.
Warning about the compiler version.
How can I turn off these warnings?
| [
"\nWarning about the Wp64 setting.\n\nTurn off the /Wp64 setting which is set by default. You can find it in Project Properties -> C/C++ -> General.\n\nWarning about the compiler version.\n\nGo to the Boost trunk (online) and get the latest boost\\boost\\config\\compiler\\visualc.hpp header file. Diff it with the current file and merge the sections where _MSC_VER is equal to 1800. (1800 is the VC9 version number used in Boost configuration.)\n\n\n"
] | [
1
] | [] | [] | [
"boost",
"c++",
"visual_studio",
"warnings"
] | stackoverflow_0000014271_boost_c++_visual_studio_warnings.txt |
Q:
GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT errors
I'm using FBOs in my OpenGL code and I'm seeing compilation errors on GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT. What's the cause of this and how do I fix it?
A:
The cause of this error is an older version of NVIDIA's glext.h, which still has this definition, whereas the most recent versions of GLEW don't. This leads to compilation errors in code that you had written previously or got from the web.
The GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT definition for FBO used to be present in the specification (and hence in header files). But, it was later removed. The reason for this can be found in the FBO extension specification (look for Issue 87):
(87) What happens if a single image is attached more than once to a
framebuffer object?
RESOLVED: The value written to the pixel is undefined.
There used to be a rule in section 4.4.4.2 that resulted in
FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT if a single
image was attached more than once to a framebuffer object.
FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8
* A single image is not attached more than once to the
framebuffer object.
{ FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT }
This rule was removed in version #117 of the
EXT_framebuffer_object specification after discussion at the
September 2005 ARB meeting. The rule essentially required an
O(n*lg(n)) search. Some implementations would not need to do that
search if the completeness rules did not require it. Instead,
language was added to section 4.10 which says the values
written to the framebuffer are undefined when this rule is
violated.
To fix this error, remove all usage of GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT from your code.
If this isn't possible in your setup, then add a dummy definition to your glext.h or glew.h file like this:
#define GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8
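With that in place, a typical completeness check just treats any non-complete status as an error; nothing needs to test for the removed value. A sketch (assumes the EXT_framebuffer_object entry points are available and <cstdio> is included):
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
{
    // Conforming drivers can no longer return the duplicate-attachment status.
    fprintf(stderr, "FBO incomplete, status: 0x%04X\n", status);
}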
| GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT errors | I'm using FBOs in my OpenGL code and I'm seeing compilation errors on GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT. What's the cause of this and how do I fix it?
| [
"The cause of this error is an older version of NVIDIA's glext.h, which still has this definition. Whereas the most recent versions of GLEW don't. This leads to compilation errors in code that you had written previously or got from the web.\nThe GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT definition for FBO used to be present in the specification (and hence in header files). But, it was later removed. The reason for this can be found in the FBO extension specification (look for Issue 87):\n(87) What happens if a single image is attached more than once to a\n framebuffer object?\n\n RESOLVED: The value written to the pixel is undefined.\n\n There used to be a rule in section 4.4.4.2 that resulted in\n FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT if a single\n image was attached more than once to a framebuffer object.\n\n FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8\n\n * A single image is not attached more than once to the\n framebuffer object.\n\n { FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT }\n\n This rule was removed in version #117 of the\n EXT_framebuffer_object specification after discussion at the\n September 2005 ARB meeting. The rule essentially required an\n O(n*lg(n)) search. Some implementations would not need to do that\n search if the completeness rules did not require it. Instead,\n language was added to section 4.10 which says the values\n written to the framebuffer are undefined when this rule is\n violated.\n\nTo fix this error, remove all usage of GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT from your code.\nIf this isn't possible in your setup, then add a dummy definition to your glext.h or glew.h file like this:\n#define GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8\n\n"
] | [
4
] | [] | [] | [
"fbo",
"opengl"
] | stackoverflow_0000014297_fbo_opengl.txt |
Q:
Using OpenGL textures larger than window/display size
I'm having problems using textures that are larger than the OpenGL window or the display size as non-display render targets.
What's the solution for this problem?
A:
There's a simple solution.
Assuming your (non-display) textures are 1024x1024 and you are restricted to a 256x256 window/display.
unsigned int WIN_WIDTH = 256;
unsigned int WIN_HEIGHT = WIN_WIDTH;
unsigned int TEX_WIDTH = 1024;
unsigned int TEX_HEIGHT = TEX_WIDTH;
Use the window size to create your OpenGL window:
glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT);
But, use the texture size for everything else:
glViewport(0, 0, TEX_WIDTH, TEX_HEIGHT);
gluOrtho2D(0.0, TEX_WIDTH, 0.0, TEX_HEIGHT);
glTexCoord2i(TEX_WIDTH, TEX_HEIGHT);
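Keep in mind that pixels drawn outside the visible window fall under the pixel-ownership test, so reading them back is not guaranteed to work on every driver. Where EXT_framebuffer_object is available, rendering to an off-screen texture sidesteps the window size entirely; a rough sketch:
GLuint fbo = 0, tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TEX_WIDTH, TEX_HEIGHT, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

glViewport(0, 0, TEX_WIDTH, TEX_HEIGHT);
// ... draw the off-screen scene here ...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the window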
| Using OpenGL textures larger than window/display size | I'm having problems using textures that are larger than the OpenGL window or the display size as non-display render targets.
What's the solution for this problem?
| [
"There's a simple solution.\nAssuming your (non-display) textures are 1024x1024 and you are restricted to a 256x256 window/display.\nunsigned int WIN_WIDTH = 256;\nunsigned int WIN_HEIGHT = WIN_WIDTH;\nunsigned int TEX_WIDTH = 1024;\nunsigned int TEX_HEIGHT = TEX_WIDTH;\n\nUse the window size to create your OpenGL window:\nglutInitWindowSize(WIN_WIDTH, WIN_HEIGHT);\n\nBut, use the texture size for everything else:\nglViewport(0, 0, TEX_WIDTH, TEX_HEIGHT);\ngluOrtho2D(0.0, TEX_WIDTH, 0.0, TEX_HEIGHT);\nglTexCoord2i(TEX_WIDTH, TEX_HEIGHT);\n\n"
] | [
4
] | [] | [] | [
"opengl",
"textures"
] | stackoverflow_0000014310_opengl_textures.txt |
Q:
Using GLUT bitmap fonts
I'm writing a simple OpenGL application that uses GLUT. I don't want to roll my own font rendering code, instead I want to use the simple bitmap fonts that ship with GLUT. What are the steps to get them working?
A:
Simple text display is easy to do in OpenGL using GLUT bitmap fonts. These are simple 2D fonts and are not suitable for display inside your 3D environment. However, they're perfect for text that needs to be overlayed on the display window.
Here are the sample steps to display Eric Cartman's favorite quote colored in green on a GLUT window:
We'll be setting the raster position in screen coordinates. So, setup the projection and modelview matrices for 2D rendering:
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0.0, WIN_WIDTH, 0.0, WIN_HEIGHT);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
Set the font color. (Set this now, not later.)
glColor3f(0.0, 1.0, 0.0); // Green
Set the window location where the text should be displayed. This is done by setting the raster position in screen coordinates. Lower left corner of the window is (0, 0).
glRasterPos2i(10, 10);
Set the font and display the string characters using glutBitmapCharacter.
string s = "Respect mah authoritah!";
void * font = GLUT_BITMAP_9_BY_15;
for (string::iterator i = s.begin(); i != s.end(); ++i)
{
char c = *i;
glutBitmapCharacter(font, c);
}
Restore back the matrices.
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
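Wrapped up as a helper, it might look like this (the function name and the WIN_* constants are mine, not GLUT's):
void drawText(int x, int y, const string & s)
{
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    gluOrtho2D(0.0, WIN_WIDTH, 0.0, WIN_HEIGHT);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glColor3f(0.0, 1.0, 0.0);
    glRasterPos2i(x, y);
    for (string::const_iterator i = s.begin(); i != s.end(); ++i)
        glutBitmapCharacter(GLUT_BITMAP_9_BY_15, *i);

    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
}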
| Using GLUT bitmap fonts | I'm writing a simple OpenGL application that uses GLUT. I don't want to roll my own font rendering code, instead I want to use the simple bitmap fonts that ship with GLUT. What are the steps to get them working?
| [
"Simple text display is easy to do in OpenGL using GLUT bitmap fonts. These are simple 2D fonts and are not suitable for display inside your 3D environment. However, they're perfect for text that needs to be overlayed on the display window.\nHere are the sample steps to display Eric Cartman's favorite quote colored in green on a GLUT window:\nWe'll be setting the raster position in screen coordinates. So, setup the projection and modelview matrices for 2D rendering:\nglMatrixMode(GL_PROJECTION);\nglPushMatrix();\nglLoadIdentity();\ngluOrtho2D(0.0, WIN_WIDTH, 0.0, WIN_HEIGHT);\n\nglMatrixMode(GL_MODELVIEW);\nglPushMatrix();\nglLoadIdentity();\n\nSet the font color. (Set this now, not later.)\nglColor3f(0.0, 1.0, 0.0); // Green\n\nSet the window location where the text should be displayed. This is done by setting the raster position in screen coordinates. Lower left corner of the window is (0, 0).\nglRasterPos2i(10, 10);\n\nSet the font and display the string characters using glutBitmapCharacter.\nstring s = \"Respect mah authoritah!\";\nvoid * font = GLUT_BITMAP_9_BY_15;\nfor (string::iterator i = s.begin(); i != s.end(); ++i)\n{\n char c = *i;\n glutBitmapCharacter(font, c);\n}\n\nRestore back the matrices.\nglMatrixMode(GL_MODELVIEW);\nglPopMatrix();\n\nglMatrixMode(GL_PROJECTION);\nglPopMatrix();\n\n"
] | [
29
] | [] | [] | [
"bitmap",
"fonts",
"glut",
"opengl"
] | stackoverflow_0000014318_bitmap_fonts_glut_opengl.txt |
Q:
Getting started with a Picasa Plugin
Does anyone here know any resources on how to get started writing a plugin for Google's Picasa? I love it for photo management, but I have some ideas for how it could be better.
Riya-esque facial search: given a large enough corpus of faces and pictures (people tend to be repeated often in individuals' albums: family, friends), I would think some semi-workable version of this could be done. And with 13+ gigs/7 years of photos, it would be very nice for search.
Upload to Facebook EDIT: Someone already made a very nice version
Upload to any non-Google property, actually.
I know there are certain APIs and a Picasa2Flickr plugin out there, and I was wondering if anyone had seen any resources on this topic or had any experience
A:
there is an open-source project which created an "Upload To FlickR" plugin. Maybe you could use it as a starting point...
A:
I thought about facial recognition many years ago but my search only found a web API - no plugin api. My idea was to use an external facial recognition program to slowly index my entire catalogue of pictures and reliably tag them according to who was in them. It wouldn't need to be 100% accurate, but anything over 85% would be acceptable.
| Getting started with a Picasa Plugin | Does anyone here know any resources on how to get started writing a plugin for Google's Picasa? I love it for photo management, but I have some ideas for how it could be better.
Riya-esque facial search: given a large enough corpus of faces and pictures (people tend to be repeated often in individuals' albums: family, friends), I would think some semi-workable version of this could be done. And with 13+ gigs/7 years of photos, it would be very nice for search.
Upload to Facebook EDIT: Someone already made a very nice version
Upload to any non-Google property, actually.
I know there are certain APIs and a Picasa2Flickr plugin out there, and I was wondering if anyone had seen any resources on this topic or had any experience
| [
"there is a an Opensource Project which created a \"Upload To FlickR\" Plugin. Maybe you could use it as an startingpoint... \n",
"I thought about facial recognition many years ago but my search only found a web API - no plugin api. My idea was to use an external facial recognition program to slowly index my entire catalogue of pictures and reliably tag them according to who was in them. It wouldn't need to be 100% accurate, but anything over 85% would be acceptable.\n"
] | [
1,
1
] | [
"I would start with the Picasa API:\nPicasa API\n"
] | [
-2
] | [
"api",
"photo_management",
"picasa",
"plugins"
] | stackoverflow_0000013881_api_photo_management_picasa_plugins.txt |
Q:
Accessing audio/video metadata with .NET
What is the best way to get and set the meta data for mp3, mp4, avi files etc. with .NET?
A:
I use MediaInfo with my C# apps; it gives you a lot of information about media files.
A:
These are the example class files for different languages found in the MediaInfo.dll zip
MediaInfoDLL.cs
MediaInfoDLL.def
MediaInfoDLL.h
MediaInfoDLL.java
MediaInfoDLL.jsl
MediaInfoDLL.pas
MediaInfoDLL.py
MediaInfoDLL.vb
MediaInfoDLL_Static.h
You do have to use interop and I don't know if you can edit tags, I've never needed to do that but it's pretty much a swiss army knife at least for getting media information from files.
Link to downloads page (sourceforge)
MediaInfo_0.7.7.4_DLL_Win32.zip
A:
You can use free UltraID3Lib .NET library to read/write MP3 metadata.
A:
I've been looking at the NTag project as well, which handles MP3/WMA/OGG. I don't know of a single library that handles audio and video files, so you might have to use a few.
A:
I've recently used Tag Lib Sharp to write some C# apps for cleaning up and maintaining my music library. I found the library very easy to use and although I've only used it for MP3's, it appears to support a range of other music/video formats.
A:
Looks like MediaInfo is read-only at this point, by the way: http://sourceforge.net/forum/message.php?msg_id=4241318&abmode=1
Very cool project, though. It's fun finding out about all this cool stuff here on SO.
A:
I used COM interop to access DirectShow's Media Detector functionality.
This does work pretty well, but it's a right pain in the backside. You need to know lots about COM, win32 interop, and so on.
You can also use DirectShowNet which should handle most of that for you, I just didn't want to lug that whole thing around when I was only interested in the MediaDetector part
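As a taste of the Tag Lib Sharp approach mentioned above, reading and updating an MP3 title looks roughly like this (API names from memory, so double-check against the library's documentation):
TagLib.File file = TagLib.File.Create("song.mp3");
Console.WriteLine(file.Tag.Title);
file.Tag.Title = "New Title";
file.Save();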
| Accessing audio/video metadata with .NET | What is the best way to get and set the meta data for mp3, mp4, avi files etc. with .NET?
| [
"I use MediaInfo with my C# apps, gives you a lot of information about media files.\n",
"These are the example class files for different languages found in the MediaInfo.dll zip\n\nMediaInfoDLL.cs\nMediaInfoDLL.def\nMediaInfoDLL.h\nMediaInfoDLL.java\nMediaInfoDLL.jsl\nMediaInfoDLL.pas\nMediaInfoDLL.py\nMediaInfoDLL.vb\nMediaInfoDLL_Static.h\n\nYou do have to use interop and I don't know if you can edit tags, I've never needed to do that but it's pretty much a swiss army knife at least for getting media information from files.\nLink to downloads page (sourceforge)\nMediaInfo_0.7.7.4_DLL_Win32.zip\n",
"You can use free UltraID3Lib .NET library to read/write MP3 metadata.\n",
"I've been looking at the NTag project as well, which handles MP3/WMA/OGG. I don't know of a single library that handles audio and video files, so you might have to use a few.\n",
"I've recently used Tag Lib Sharp to write some C# apps for cleaning up and maintaining my music library. I found the library very easy to use and although i've only used it for MP3's, it appears to support a range of other music/video formats.\n",
"Looks like MediaInfo is read-only at this point, by the way: http://sourceforge.net/forum/message.php?msg_id=4241318&abmode=1\nVery cool project, though. It's fun finding out about all this cool stuff here on SO.\n",
"I used COM interop to access DirectShow's Media Detector functionality.\nThis does work pretty well, but it's a right pain in the backside. You need to know lots about COM, win32 interop, and so on.\nYou can also use DirectShowNet which should handle most of that for you, I just didn't want to lug that whole thing around when I was only interested in the MediaDetector part\n"
] | [
6,
4,
3,
3,
3,
1,
1
] | [] | [] | [
".net",
"audio",
"mp3",
"video"
] | stackoverflow_0000009091_.net_audio_mp3_video.txt |
Q:
Accessing OpenGL state variables in Cg
I need to access the OpenGL state variables (such as the MVP matrices) in my Cg shader program. I'm passing these values to my Cg shader program manually using calls such as cgGLSetStateMatrixParameter() in my C/C++ code. Is there an easier way to do this?
A:
If you are on any fairly recent Cg profile (arbvp1 and later), your Cg shader programs can in fact access the OpenGL state (MVP matrices, material and light settings) directly. This makes writing those programs less painful.
Here are some of the state variables which can be accessed:
MVP matrices of all types:
state.matrix.mvp
state.matrix.inverse.mvp
state.matrix.modelview
state.matrix.inverse.modelview
state.matrix.modelview.invtrans
state.matrix.projection
state.matrix.inverse.projection
Light and material properties:
state.material.ambient
state.material.diffuse
state.material.specular
state.light[0].ambient
For the full list of state variables, refer to the section Accessing OpenGL State, OpenGL ARB Vertex Program Profile (arbvp1) in the Cg Users Manual.
Note:
All the OpenGL state variables are of uniform type when accessed in Cg.
For light variables, the index is mandatory. (Eg: 1 in state.light[1].ambient)
Lighting or light(s) need not be enabled to use those corresponding light values inside Cg. But, they need to be set using glLight() functions.
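For example, a minimal arbvp1 vertex program that transforms its input with the tracked modelview-projection matrix (a sketch; the parameter names are arbitrary):
void main(float4 position : POSITION,
          out float4 oPosition : POSITION,
          uniform float4x4 modelViewProj : state.matrix.mvp)
{
    oPosition = mul(modelViewProj, position);
}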
| Accessing OpenGL state variables in Cg | I need to access the OpenGL state variables (such as the MVP matrices) in my Cg shader program. I'm passing these values to my Cg shader program manually using calls such as cgGLSetStateMatrixParameter() in my C/C++ code. Is there an easier way to do this?
| [
"If you are on any fairly recent Cg profile (arbvp1 and later), your Cg shader programs can in fact access the OpenGL state (MVP matrices, material and light settings) directly. This makes writing those programs less painful.\nHere are some of the state variables which can be accessed:\nMVP matrices of all types:\nstate.matrix.mvp\nstate.matrix.inverse.mvp\nstate.matrix.modelview\nstate.matrix.inverse.modelview\nstate.matrix.modelview.invtrans\nstate.matrix.projection\nstate.matrix.inverse.projection\n\nLight and material properties:\nstate.material.ambient\nstate.material.diffuse\nstate.material.specular\nstate.light[0].ambient\n\nFor the full list of state variables, refer to the section Accessing OpenGL State, OpenGL ARB Vertex Program Profile (arbvp1) in the Cg Users Manual.\nNote:\n\nAll the OpenGL state variables are of uniform type when accessed in Cg.\nFor light variables, the index is mandatory. (Eg: 1 in state.light[1].ambient)\nLighting or light(s) need not be enabled to use those corresponding light values inside Cg. But, they need to be set using glLight() functions.\n\n"
] | [
4
] | [] | [] | [
"cg",
"opengl",
"state",
"variables"
] | stackoverflow_0000014358_cg_opengl_state_variables.txt |
Q:
Valid OpenGL context
How and at what stage is a valid OpenGL context created in my code? I'm getting errors on even simple OpenGL code.
A:
From the posts on comp.graphics.api.opengl, it seems like most newbies burn their hands on their first OpenGL program. In most cases, the error is caused due to OpenGL functions being called even before a valid OpenGL context is created. OpenGL is a state machine. Only after the machine has been started and humming in the ready state, can it be put to work.
Here is some simple code to create a valid OpenGL context:
#include <stdlib.h>
#include <GL/glut.h>
// Window attributes
static const unsigned int WIN_POS_X = 30;
static const unsigned int WIN_POS_Y = WIN_POS_X;
static const unsigned int WIN_WIDTH = 512;
static const unsigned int WIN_HEIGHT = WIN_WIDTH;
void glInit(int, char **);
int main(int argc, char * argv[])
{
// Initialize OpenGL
glInit(argc, argv);
// A valid OpenGL context has been created.
// You can call OpenGL functions from here on.
glutMainLoop();
return 0;
}
void glInit(int argc, char ** argv)
{
// Initialize GLUT
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE);
glutInitWindowPosition(WIN_POS_X, WIN_POS_Y);
glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT);
glutCreateWindow("Hello OpenGL!");
return;
}
Note:
The call of interest here is glutCreateWindow(). It not only creates a window, but also creates an OpenGL context.
The window created with glutCreateWindow() is not visible until glutMainLoop() is called.
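One caveat: classic GLUT raises a fatal error if a window needs redisplay and no display callback is registered, so in practice you also want a minimal display function (a sketch):
void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

// ... and in glInit(), right after glutCreateWindow():
// glutDisplayFunc(display);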
| Valid OpenGL context | How and at what stage is a valid OpenGL context created in my code? I'm getting errors on even simple OpenGL code.
| [
"From the posts on comp.graphics.api.opengl, it seems like most newbies burn their hands on their first OpenGL program. In most cases, the error is caused due to OpenGL functions being called even before a valid OpenGL context is created. OpenGL is a state machine. Only after the machine has been started and humming in the ready state, can it be put to work.\nHere is some simple code to create a valid OpenGL context:\n#include <stdlib.h>\n#include <GL/glut.h>\n\n// Window attributes\nstatic const unsigned int WIN_POS_X = 30;\nstatic const unsigned int WIN_POS_Y = WIN_POS_X;\nstatic const unsigned int WIN_WIDTH = 512;\nstatic const unsigned int WIN_HEIGHT = WIN_WIDTH;\n\nvoid glInit(int, char **);\n\nint main(int argc, char * argv[])\n{\n // Initialize OpenGL\n glInit(argc, argv);\n\n // A valid OpenGL context has been created.\n // You can call OpenGL functions from here on.\n\n glutMainLoop();\n\n return 0;\n}\n\nvoid glInit(int argc, char ** argv)\n{\n // Initialize GLUT\n glutInit(&argc, argv);\n glutInitDisplayMode(GLUT_DOUBLE);\n glutInitWindowPosition(WIN_POS_X, WIN_POS_Y);\n glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT);\n glutCreateWindow(\"Hello OpenGL!\");\n\n return;\n}\n\nNote:\n\nThe call of interest here is glutCreateWindow(). It not only creates a window, but also creates an OpenGL context.\nThe window created with glutCreateWindow() is not visible until glutMainLoop() is called.\n\n"
] | [
4
] | [] | [] | [
"glut",
"opengl"
] | stackoverflow_0000014364_glut_opengl.txt |
Q:
What's the deal with |Pipe-delimited| variables in connection strings?
I know that |DataDirectory| will resolve to App_Data in an ASP.NET application but is that hard-coded or is there a generalized mechanism at work along the lines of %environment variables%?
A:
From the MSDN Smart Client Data Blog:
In this version, the .NET runtime
added support for what we call the
DataDirectory macro. This allows
Visual Studio to put a special
variable in the connection string that
will be expanded at run-time...
By default, the |DataDirectory|
variable will be expanded as follow:
For applications placed in a
directory on the user machine, this
will be the app's (.exe) folder.
For apps running under ClickOnce, this will be a special data folder
created by ClickOnce
For Web apps, this will be the App_Data folder
Under the hood, the value for
|DataDirectory| simply comes from a
property on the app domain. It is
possible to change that value and
override the default behavior by doing
this:
AppDomain.CurrentDomain.SetData("DataDirectory", newpath)
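For illustration, here is a minimal sketch of that override in a console app. The folder path, database file name and connection string below are made-up examples, not part of the original answer:
using System;
using System.Data.SqlClient;

class DataDirectoryDemo
{
    static void Main()
    {
        // Re-point |DataDirectory| before any connection string is used.
        AppDomain.CurrentDomain.SetData("DataDirectory", @"C:\MyAppData");

        // The provider expands the macro when the connection is opened.
        string cs = @"Data Source=.\SQLEXPRESS;" +
                    @"AttachDbFilename=|DataDirectory|\App.mdf;" +
                    "Integrated Security=True;User Instance=True";

        using (SqlConnection conn = new SqlConnection(cs))
        {
            conn.Open();
            Console.WriteLine(conn.Database);
        }
    }
}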
| What's the deal with |Pipe-delimited| variables in connection strings? | I know that |DataDirectory| will resolve to App_Data in an ASP.NET application but is that hard-coded or is there a generalized mechanism at work along the lines of %environment variables%?
| [
"From the MSDN Smart Client Data Blog:\n\nIn this version, the .NET runtime\n added support for what we call the\n DataDirectory macro. This allows\n Visual Studio to put a special\n variable in the connection string that\n will be expanded at run-time...\nBy default, the |DataDirectory|\n variable will be expanded as follow:\n\nFor applications placed in a\n directory on the user machine, this\n will be the app's (.exe) folder.\nFor apps running under ClickOnce, this will be a special data folder\n created by ClickOnce\nFor Web apps, this will be the App_Data folder\n\nUnder the hood, the value for\n |DataDirectory| simply comes from a\n property on the app domain. It is\n possible to change that value and\n override the default behavior by doing\n this:\nAppDomain.CurrentDomain.SetData(\"DataDirectory\", newpath)\n\n\n"
] | [
5
] | [] | [] | [
".net",
"ado.net",
"connection_string",
"macros",
"syntax"
] | stackoverflow_0000014241_.net_ado.net_connection_string_macros_syntax.txt |
Q:
GLUT pop-up menus
Is it easy to create GLUT pop-up menus for my OpenGL application? If yes, how?
A:
Creating and using pop-up menus with GLUT is very simple. Here is a code sample that creates a pop-up menu with 4 options:
// Menu items
enum MENU_TYPE
{
MENU_FRONT,
MENU_SPOT,
MENU_BACK,
MENU_BACK_FRONT,
};
// Assign a default value
MENU_TYPE show = MENU_BACK_FRONT;
// Menu handling function declaration
void menu(int);
int main()
{
// ...
// Create a menu
glutCreateMenu(menu);
// Add menu items
glutAddMenuEntry("Show Front", MENU_FRONT);
glutAddMenuEntry("Show Back", MENU_BACK);
glutAddMenuEntry("Spotlight", MENU_SPOT);
glutAddMenuEntry("Blend 'em all", MENU_BACK_FRONT);
// Associate a mouse button with menu
glutAttachMenu(GLUT_RIGHT_BUTTON);
// ...
return 0;
}
// Menu handling function definition
void menu(int item)
{
switch (item)
{
case MENU_FRONT:
case MENU_SPOT:
case MENU_BACK:
case MENU_BACK_FRONT:
{
show = (MENU_TYPE) item;
}
break;
default:
{ /* Nothing */ }
break;
}
glutPostRedisplay();
return;
}
| GLUT pop-up menus | Is it easy to create GLUT pop-up menus for my OpenGL application? If yes, how?
| [
"Creating and using pop-up menus with GLUT is very simple. Here is a code sample that creates a pop-up menu with 4 options:\n// Menu items\nenum MENU_TYPE\n{\n MENU_FRONT,\n MENU_SPOT,\n MENU_BACK,\n MENU_BACK_FRONT,\n};\n\n// Assign a default value\nMENU_TYPE show = MENU_BACK_FRONT;\n\n// Menu handling function declaration\nvoid menu(int);\n\nint main()\n{\n // ...\n\n // Create a menu\n glutCreateMenu(menu);\n\n // Add menu items\n glutAddMenuEntry(\"Show Front\", MENU_FRONT);\n glutAddMenuEntry(\"Show Back\", MENU_BACK);\n glutAddMenuEntry(\"Spotlight\", MENU_SPOT);\n glutAddMenuEntry(\"Blend 'em all\", MENU_BACK_FRONT);\n\n // Associate a mouse button with menu\n glutAttachMenu(GLUT_RIGHT_BUTTON);\n\n // ...\n\n return;\n}\n\n// Menu handling function definition\nvoid menu(int item)\n{\n switch (item)\n {\n case MENU_FRONT:\n case MENU_SPOT:\n case MENU_DEPTH:\n case MENU_BACK:\n case MENU_BACK_FRONT:\n {\n show = (MENU_TYPE) item;\n }\n break;\n default:\n { /* Nothing */ }\n break;\n }\n\n glutPostRedisplay();\n\n return;\n}\n\n"
] | [
13
] | [] | [] | [
"glut",
"menu",
"opengl"
] | stackoverflow_0000014370_glut_menu_opengl.txt |
Q:
How do I call a Flex SWF from a remote domain using Flash (AS3)?
I have a Flex swf hosted at http://www.a.com/a.swf.
I have Flash code on another domain that tries to load the SWF:
_loader = new Loader();
var req:URLRequest = new URLRequest("http://services.nuconomy.com/n.swf");
_loader.contentLoaderInfo.addEventListener(Event.COMPLETE,onLoaderFinish);
_loader.load(req);
On the onLoaderFinish event I try to load classes from the remote SWF and create them:
_loader.contentLoaderInfo.applicationDomain.getDefinition("someClassName") as Class
When this code runs I get the following exception
SecurityError: Error #2119: Security sandbox violation: caller http://localhost.service:1234/flashTest/Main.swf cannot access LoaderInfo.applicationDomain owned by http://www.b.com/b.swf.
at flash.display::LoaderInfo/get applicationDomain()
at NuconomyLoader/onLoaderFinish()
Is there any way to get this code working?
A:
This is all described in The Adobe Flex 3 Programming ActionScript 3 PDF on page 550 (Chapter 27: Flash Player Security / Cross-scripting):
If two SWF files written with ActionScript 3.0 are served from different domains—for example, http://siteA.com/swfA.swf and http://siteB.com/swfB.swf—then, by default, Flash Player does not allow swfA.swf to script swfB.swf, nor swfB.swf to script swfA.swf. A SWF file gives permission to SWF files from other domains by calling Security.allowDomain(). By calling Security.allowDomain("siteA.com"), swfB.swf gives SWF files from siteA.com permission to script it.
It goes on in some more detail, with diagrams and all.
A:
You'll need a crossdomain.xml policy file on the server that has the file you load; it should look something like this:
<?xml version="1.0"?>
<!-- http://www.foo.com/crossdomain.xml -->
<cross-domain-policy>
<allow-access-from domain="www.friendOfFoo.com" />
<allow-access-from domain="*.foo.com" />
<allow-access-from domain="105.216.0.40" />
</cross-domain-policy>
Put it as crossdomain.xml in the root of the domain you're loading from.
Also you need to set the loader to read this file as such:
var loaderContext:LoaderContext = new LoaderContext();
loaderContext.checkPolicyFile = true;
var loader:Loader = new Loader();
loader.contentLoaderInfo.addEventListener( Event.COMPLETE, onComplete );
loader.load( new URLRequest( "http://my.domain.com/image.png" ), loaderContext );
code sample yoinked from http://blog.log2e.com/2008/08/15/when-a-cross-domain-policy-file-is-not-enough/
A:
Mayhaps System.Security.allowDomain is what you need?
| How do I call a Flex SWF from a remote domain using Flash (AS3)? | I have a Flex swf hosted at http://www.a.com/a.swf.
I have Flash code on another domain that tries to load the SWF:
_loader = new Loader();
var req:URLRequest = new URLRequest("http://services.nuconomy.com/n.swf");
_loader.contentLoaderInfo.addEventListener(Event.COMPLETE,onLoaderFinish);
_loader.load(req);
On the onLoaderFinish event I try to load classes from the remote SWF and create them:
_loader.contentLoaderInfo.applicationDomain.getDefinition("someClassName") as Class
When this code runs I get the following exception
SecurityError: Error #2119: Security sandbox violation: caller http://localhost.service:1234/flashTest/Main.swf cannot access LoaderInfo.applicationDomain owned by http://www.b.com/b.swf.
at flash.display::LoaderInfo/get applicationDomain()
at NuconomyLoader/onLoaderFinish()
Is there any way to get this code working?
| [
"This is all described in The Adobe Flex 3 Programming ActionScript 3 PDF on page 550 (Chapter 27: Flash Player Security / Cross-scripting):\n\nIf two SWF files written with ActionScript 3.0 are served from different domains—for example, http://siteA.com/swfA.swf and http://siteB.com/swfB.swf—then, by default, Flash Player does not allow swfA.swf to script swfB.swf, nor swfB.swf to script swfA.swf. A SWF file gives permission to SWF files from other domains by calling Security.allowDomain(). By calling Security.allowDomain(\"siteA.com\"), swfB.swf gives SWF files from siteA.com permission to script it.\n\nIt goes on in some more detail, with diagrams and all.\n",
"You'll need a crossdomain.xml policy file on the server that has the file you load, it should look a something like this:\n<?xml version=\"1.0\"?>\n<!-- http://www.foo.com/crossdomain.xml -->\n<cross-domain-policy>\n <allow-access-from domain=\"www.friendOfFoo.com\" />\n <allow-access-from domain=\"*.foo.com\" />\n <allow-access-from domain=\"105.216.0.40\" />\n</cross-domain-policy>\n\nPut it as crossdomain.xml in the root of the domain you're loading from.\nAlso you need to set the loader to read this file as such:\nvar loaderContext:LoaderContext = new LoaderContext();\nloaderContext.checkPolicyFile = true;\n\nvar loader:Loader = new Loader();\nloader.contentLoaderInfo.addEventListener( Event.COMPLETE, onComplete );\nloader.load( new URLRequest( \"http://my.domain.com/image.png\" ), loaderContext );\n\ncode sample yoinked from http://blog.log2e.com/2008/08/15/when-a-cross-domain-policy-file-is-not-enough/\n",
"Mayhaps System.Security.allowDomain is what you need?\n"
] | [
6,
2,
0
] | [] | [] | [
"actionscript_3",
"apache_flex",
"flash",
"security"
] | stackoverflow_0000014350_actionscript_3_apache_flex_flash_security.txt |
Q:
In ASP.NET MVC I encounter an incorrect type error when rendering a user control with the correct typed object
I encounter an error of the form: "The model item passed into the dictionary is of type FooViewData but this dictionary requires a model item of type bar" even though I am passing in an object of the correct type (bar) for the typed user control.
A:
What @MattMitchell said is probably the reason you're seeing this error.
If you want to know why, it is because when you pass null as the controlData parameter when using RenderUserControl(), the framework will try to pass the view data from the current view context onto the user control instead (see UserControlExtensions.DoRendering method in System.Web.Mvc).
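A minimal sketch of the fix this implies, assuming the RenderUserControl(virtualPath, controlData) overload mentioned above; the view path and variable name are made up:
<%-- Pass a non-null, correctly typed object explicitly, so the framework
     does not fall back to the view's own (differently typed) view data. --%>
<%= Html.RenderUserControl("~/Views/Shared/Bar.ascx", myBarInstance) %>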
A:
What has probably happened is that the object provided when rendering the user control is actually null.
| In ASP.NET MVC I encounter an incorrect type error when rendering a user control with the correct typed object | I encounter an error of the form: "The model item passed into the dictionary is of type FooViewData but this dictionary requires a model item of type bar" even though I am passing in an object of the correct type (bar) for the typed user control.
| [
"What @MattMitchell said is probably the reason you're seeing this error.\nIf you want to know why; it is because when you pass null as the controlData parameter when using RenderUserControl(), the framework will try to pass the view data from the current view context onto the user control instead (see UserControlExtensions.DoRendering method in System.Web.Mvc).\n",
"What has probably happened is that the object provided when rendering the user control is actually null.\n"
] | [
4,
1
] | [] | [] | [
"asp.net_mvc"
] | stackoverflow_0000005544_asp.net_mvc.txt |
Q:
File Uploads via Web Services
Is it possible to upload a file from a client's computer to the server through a web service? The client can be running anything from a native desktop app to a thin ajax client.
A:
It's certainly possible to send binary files via web services (eg. SOAP), but you usually have to do some kind of encoding such as base64, which increases the amount of data to send. One of the most efficient ways to send an arbitrary binary file is via an HTTP PUT operation, since there is no encoding overhead. Not all clients necessarily have an easy way to do this, but it's worth looking.
The other side of that coin is how to get the data off the user's disk an on to the network connection. A "thin ajax client" might not have the requisite permissions to read files from the user's disk. On the other hand, a desktop app implementation would be able to do so without any problem.
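As a rough C# sketch of the encoding half (the file path is made up, and the actual proxy call is only indicated in a comment, since the service itself is hypothetical):
using System;
using System.IO;

class UploadClient
{
    static void Main()
    {
        // Read the file and Base64-encode it; expect roughly 33% size overhead.
        byte[] raw = File.ReadAllBytes(@"C:\temp\report.pdf");
        string encoded = Convert.ToBase64String(raw);

        // A generated web service proxy would then carry it as a string, e.g.:
        // service.Upload("report.pdf", encoded);
        Console.WriteLine("Encoded length: {0} characters", encoded.Length);
    }
}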
A:
I'm not an expert in web services, but if you develop both the web service and the client, you can always convert the binary file to Base64 on the client (this can be done in Java, and I suppose in Ajax too) and transfer it as a "string"; on the other side, the web service decodes it from Base64 back to binary...
It's one idea that works, but maybe not "correct" in every environment.
| File Uploads via Web Services | Is it possible to upload a file from a client's computer to the server through a web service? The client can be running anything from a native desktop app to a thin ajax client.
| [
"It's certainly possible to send binary files via web services (eg. SOAP), but you usually have to do some kind of encoding such as base64, which increases the amount of data to send. One of the most efficient ways to send an arbitrary binary file is via an HTTP PUT operation, since there is no encoding overhead. Not all clients necessarily have an easy way to do this, but it's worth looking.\nThe other side of that coin is how to get the data off the user's disk an on to the network connection. A \"thin ajax client\" might not have the requisite permissions to read files from the user's disk. On the other hand, a desktop app implementation would be able to do so without any problem.\n",
"I'm not a master in \"webservice\", but if you develop the webservice (and the client), you always can convert the binary file to BASE64 in the client (can do in java... and i soupose in ajax too) and transfer as \"string\", in the other side, in the webservice encode to binary from BASE64...\nIt's one idea, that's work, but maybe not \"correct\" in all enviroment.\n"
] | [
1,
0
] | [] | [] | [
".net",
"file_upload",
"upload",
"web_services"
] | stackoverflow_0000011782_.net_file_upload_upload_web_services.txt |
Q:
Is there any trick that allows to use Management Studio's (ver. 2008) IntelliSense feature with earlier versions of SQL Server?
New version of Management Studio (i.e. the one that ships with SQL Server 2008) finally has a Transact-SQL IntelliSense feature. However, out-of-the-box it only works with SQL Server 2008 instances.
Is there some workaround for this?
A:
There's no known trick 'in the wild' for getting around this, other than using CTP-6 of SQL Server 2008 (in favour of the RTM).
The reasons for removing backward compatibility (and a lot more discussion besides) are provided at the relevant feedback on Microsoft Connect.
Edit: sorry, I don't know where this CTP is available, if at all
A:
Has anyone tried either patching SSMS not to check the version (perhaps try looking at the binary differences between CTP 6 and RTM?), or patching SS 2005 to pretend to be 2008?
Unclean, I know, but I don't see any other way.
| Is there any trick that allows to use Management Studio's (ver. 2008) IntelliSense feature with earlier versions of SQL Server? | New version of Management Studio (i.e. the one that ships with SQL Server 2008) finally has a Transact-SQL IntelliSense feature. However, out-of-the-box it only works with SQL Server 2008 instances.
Is there some workaround for this?
| [
"There's no known trick 'in the wild' for getting around this, other than using CTP-6 of SQL Server 2008 (in favour of the RTM).\nthe reasons for removing backward compatability (and a lot more discussion besides) are provided at the relevant feedback in microsoft connect.\nedit: sorry i don't know where this ctp is available, if at all\n",
"Has anyone tried either patching SSMS not to check the version (perhaps try looking at the binary differences between CTP 6 and RTM?), or patching SS 2005 to pretend to be 2008?\nUnclean, I know, but I don't see any other way.\n"
] | [
8,
1
] | [] | [] | [
"database",
"intellisense",
"sql",
"sql_server",
"sql_server_2008"
] | stackoverflow_0000004511_database_intellisense_sql_sql_server_sql_server_2008.txt |
Q:
How do I access the Ruby AST from C level code?
I understand that the Ruby 1.8 AST is traversed at runtime using a big switch statement, and many things like calling a method in a class or parent module involve the interpreter looking up and down the tree as it goes. Is there a straightforward way of accessing this AST in a Ruby C extension? Does it involve the Ruby extension API, or necessitate hacking the internal data structures directly?
A:
A good starting point is probably to read the source of the ParseTree library, which lets you get at and mess with the AST from ruby.
A:
Thanks for the tip. You're right - ParseTree seems to be the only code out there with any manipulation of the AST going on, except that it's actually written in RubyInline.
So, it's a strange mixture between Ruby and C code. Very interesting reading, though.
The other reference of course is eval.c from Ruby itself.
It's going to take a fair bit of reading of both, to get my head around it.
| How do I access the Ruby AST from C level code? | I understand that the Ruby 1.8 AST is traversed at runtime using a big switch statement, and many things like calling a method in a class or parent module involve the interpreter looking up and down the tree as it goes. Is there a straightforward way of accessing this AST in a Ruby C extension? Does it involve the Ruby extension API, or necessitate hacking the internal data structures directly?
| [
"A good starting point is probably to read the source of the ParseTree library, which lets you get at and mess with the AST from ruby.\n",
"Thanks for the tip. You're right - ParseTree seems to be the only code out there with any manipulation of the AST going on, except that it's actually written in RubyInline. \nSo, it's a strange mixture between Ruby and C code. Very interesting reading, though.\nThe other reference of course is eval.c from Ruby itself.\nIt's going to take a fair bit of reading of both, to get my head around it.\n"
] | [
1,
0
] | [] | [] | [
"c",
"interpreter",
"ruby",
"tree"
] | stackoverflow_0000013981_c_interpreter_ruby_tree.txt |
Q:
Software to use when designing classes
What software do you use when designing classes and their relationship, or just pen and paper?
A:
I find pen and paper very useful, and I try to get as far away from a computer as possible. If I do it on the compy, I'm always too tempted to start programming the solution. That inevitably leads to me changing things later that I would have spotted in the planning phase had I actually spent a good measure of time on it.
A:
I usually start with an empty interface, then start writing tests. I then generate the members using refactoring tools. For me unit testing is part of the design.
A:
OmniGraffle (Visio-esque app for Mac OS X), sometimes. Otherwise, just pen and paper will do.
A:
It's easy, while in the paper-and-pen (or whatever non-code equivalent you prefer) stage, to overstay, falling prey to the dreaded YAGNI syndrome. How many of us have carefully designed in some "sexy" feature that ended up never being used? (Raises hand. Hands.)
Small iterative test-driven steps and frequent refactoring - let the code tell you what it wants to be.
Most of my projects start out with the only certainty being that we won't end up where we currently think we will. So spending very much time on Big Up-Front Design (or Big Design Up Front if you prefer) is wasteful - better to start with the first thing we want to do and see where we end up.
It kind of depends on where you consider design to end. I read an article a few years back that presented the idea that coding is design - or for the Big Process fans at least it's the back-end of design. It rang true to me and changed forever the way I viewed the stages of the development process. Of course, I've just googled like crazy for the darn thing. Could I find it? Could I heck. Perhaps I dreamed the article and it's all my own idea. Yeah, that'll be it.
A:
Pen and paper for the first draft. Umlet to digitalize it. It's very minimal but it does what I need
A:
I use pen and paper.
For all planning purposes, it's the fastest way.
I get lost in layout and finetuning when I use a UML package.
But that is my burden.. :-)
A:
Go for PENCIL and paper, or a whiteboard. Anything permanent-marking like a pen and you'll have a pretty messy design!
A:
Whiteboard for the first 35 or 40 drafts. UML is nice after that, but not particularly necessary. The best documentation after you've hashed out the details is clean code.
A:
Mostly pen and paper, although I occasionally break out Visio and just do some rough diagrams.
Would be nice to have a fancy tool I guess, but it would just be another thing to learn.
A:
When doing an initial design I like a whiteboard and 1 - 3 other developers to bounce ideas off of. That's usually enough to catch any glaring errors/fix any tricky situations that may arise without dropping the s/n ratio by too much.
A:
I find pen and paper, a whiteboard and possibly some CRC cards to be very useful. Most of the time I think a whiteboard and some stickers or cards with the class and/or module names written on them work best when doing planning and designing as a group. Pen and paper is fine if you are doing the activity alone. Once you have the basic structure set you can always make a pretty UML diagram.
A:
Pen and Paper and/or Whiteboards for drafting, a more comprehensive tool for documentation purposes.
I mainly use Class Diagrams and a few sketches with Sequence Diagrams to get most of the relationships right.
About the tools: At work I use Enterprise Architect but personally I find Visual Paradigm for UML a better choice. The latter is much more flexible and allows quick drafting as well.
At VP they also have a version called Agilian for some time now (which I have not yet used) which seems to be even more flexible, allowing sketches to become documentation in no-time... maybe one day this tool will replace my paper sketches (save the trees :P).
| Software to use when designing classes | What software do you use when designing classes and their relationship, or just pen and paper?
| [
"I find pen and paper very useful, and I try to get as far away from a computer as possible. If I do it on the compy, I'm always too tempted to start programming the solution. That inevitably leads to me changing things later that I would have spotted in the planning phase had I actually spent a good measure of time on it.\n",
"I usually start with a empty interface then start writing tests. I then generate the members using refactoring tools. For me unit testing is part of the design.\n",
"OmniGraffle (Visio-esque app for Mac OS X), sometimes. Otherwise, just pen and paper will do.\n",
"It's easy, while in the paper-and-pen (or whatever non-code equivalent you prefer) stage, to overstay, falling prey to the dreaded YAGNI syndrome. How many of us have carefully designed in some \"sexy\" feature that ended up never being used? (Raises hand. Hands.)\nSmall iterative test-driven steps and frequent refactoring - let the code tell you what it wants to be.\nMost of my projects start out with the only certainty being that we won't end up where we currently think we will. So spending very much time on Big Up-Front Design (or Big Design Up Front if you prefer) is wasteful - better to start with the first thing we want to do and see where we end up.\nIt kind of depends on where you consider design to end. I read an article a few years back that presented the idea that coding is design - or for the Big Process fans at least it's the back-end of design. It rang true to me and changed forever the way I viewed the stages of the development process. Of course, I've just googled like crazy for the darn thing. Could I find it? Could I heck. Perhaps I dreamed the article and it's all my own idea. Yeah, that'll be it.\n",
"Pen and paper for the first draft. Umlet to digitalize it. It's very minimal but it does what I need\n",
"I use pen and paper. \nFor all planning purposes, it's the fastest way.\nI get lost in layout and finetuning when I use a UML package.\nBut that is my burden.. :-)\n",
"Go for PENCIL and paper, or a whiteboard. Anything permenant-marking like a pen and you'll have a pretty messy design!\n",
"Whiteboard for the first 35 or 40 drafts. UML is nice after that, but not particularly necessary. The best documentation after you've hashed out the details is clean code.\n",
"Mostly pen and paper, although I occasionally break out Visio and just do some rough diagrams.\nWould be nice to have a fancy tool I guess, but it would just be another thing to learn.\n",
"When doing an initial design I like a whiteboard and 1 - 3 other developers to bounce ideas off of. That's usually enough to catch any glaring errors/fix any tricky situations that may arise without dropping the s/n ratio by too much.\n",
"I find pen and paper, a whiteboard and possibly some CRC cards to be very useful. Most of the time I think a whiteboard and some stickers or cards with the class and/or module names written on them work best when doing planning and designing as a group. Pen and paper is fine if you are doing the activity alone. Once you have the basic structure set you can always make a pretty UML diagram.\n",
"Pen and Paper and/or Whiteboards for drafting, a more comprehensive tool for documentation purposes.\nI mainly use Class Diagrams and a few sketches with Sequence Diagrams to get most of the relationships right.\nAbout the tools: At work I use Enterprise Architect but personally I find Visual Paradigm for UML a better choice. The latter is much more flexible and allows quick drafting as well.\nAt VP they also have a version called Agilian for some time now (which I have not yet used) which seems to be even more flexible, allowing sketches to become documentation in no-time... maybe one day this tool will replace my paper sketches (save the trees :P).\n"
] | [
5,
2,
2,
2,
1,
1,
1,
1,
0,
0,
0,
0
] | [] | [] | [
"class_design",
"editor"
] | stackoverflow_0000014435_class_design_editor.txt |
Q:
How to organize dataset queries to improve performance
I don't know when to add a TableAdapter or a query from the toolbox to a dataset. Does it make any difference?
I also don't know where to create instances of the adapters.
Should I do it in the Page_Load?
Should I just do it when I'm going to use it?
Am I opening a new connection when I create a new instance?
This doesn't seem very important, but every time I create a query a little voice in my brain asks me these questions.
A:
Should I just do it when I'm going to use it?
I would recommend that you only retrieve the data when you are going to use it. If you are not going to need it, there is no reason to waste resources by retrieving it in Page_Load. If you are going to need it multiple times throughout the page load, consider saving the query results to a private variable or collection so that the same data can be reused multiple times throughout the page load.
Am I opening a new connection when I create a new instance?
Asp.net handles connection pooling, and opens and closes connections in an efficient way. You shouldn't have to worry about this.
One other thing to consider from a performance perspective is to avoid using Datasets and TableAdapters. In many cases, they add extra overhead into data retrieval that does not exist when using Linq to Sql, Stored Procedures or DataReaders.
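To make that last point concrete, here is a minimal SqlDataReader sketch; the connection string, table and columns are invented for illustration:
using System;
using System.Data.SqlClient;

class ReaderDemo
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection(
            "Data Source=.;Initial Catalog=Shop;Integrated Security=True"))
        using (SqlCommand cmd = new SqlCommand("SELECT Id, Name FROM Products", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Forward-only, read-only access; no DataSet/TableAdapter overhead.
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
            }
        }
    }
}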
| How to organize dataset queries to improve performance | I don't know when to add a TableAdapter or a query from the toolbox to a dataset. Does it make any difference?
I also don't know where to create instances of the adapters.
Should I do it in the Page_Load?
Should I just do it when I'm going to use it?
Am I opening a new connection when I create a new instance?
This doesn't seem very important, but every time I create a query a little voice in my brain asks me these questions.
| [
"\nShould I just do it when I'm going to use it?\n\nI would recommend that you only retrieve the data when you are going to use it. If you are not going to need it, there is no reason to waste resources by retrieving it in Page_Load. If you are going to need it multiple times throughout the page load, consider saving the query results to a private variable or collection so that the same data can be reused multiple times throughout the page load.\n\nAm I opening a new connection when I create a new instance?\n\nAsp.net handles connection pooling, and opens and closes connections in an efficient way. You shouldn't have to worry about this.\nOne other thing to consider from a performance perspective is to avoid using Datasets and TableAdapters. In many cases, they add extra overhead into data retrieval that does not exist when using Linq to Sql, Stored Procedures or DataReaders.\n"
] | [
2
] | [] | [] | [
"asp.net",
"database",
"performance"
] | stackoverflow_0000014570_asp.net_database_performance.txt |
Q:
How to register COM from VS Setup project?
I have marked my DLL as vsdraCOM, and I can see it in the registry after installing, but my application does not see the COM interface until I call RegAsm on it manually. Why could this be?
The COM registration does not work on Vista (confirmed myself) and on XP (confirmed by a colleague).
Using Visual Studio 2005 on XP.
A:
Well, I have found a solution:
Run RegAsm.exe with the /regfile option to generate the registry entries.
Manually import the .reg file into the VS Setup project by viewing the registry, right clicking, and choosing "Import..."
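For example, step 1 might look like this (the assembly name is illustrative):
RegAsm.exe MyComLibrary.dll /regfile:MyComLibrary.reg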
| How to register COM from VS Setup project? | I have marked my DLL as vsdraCOM, and I can see it in the registry after installing, but my application does not see the COM interface until I call RegAsm on it manually. Why could this be?
The COM registration does not work on Vista (confirmed myself) and on XP (confirmed by a colleague).
Using Visual Studio 2005 on XP.
| [
"Well, I have found a solution:\n\nRun RegAsm.exe with the /regfile option to generate the registry entries.\nManually import the .reg file into the VS Setup project by viewing the registry, right clicking, and choosing \"Import...\"\n\n"
] | [
5
] | [] | [] | [
"com",
"registry"
] | stackoverflow_0000012807_com_registry.txt |
Q:
Default database IDs; system and user values
As part of our current database work, we are looking at a dealing with the process of updating databases.
A point which has been brought up recurrently, is that of dealing with system vs. user values; in our project user and system vals are stored together. For example...
We have a list of templates.
1, <system template>
2, <system template>
3, <system template>
These are mapped in the app to an enum (1, 2, 3)
Then a user comes in and adds...
4, <user template>
...and...
5, <user template>
Then.. we issue an upgrade.. and insert as part of our upgrade scripts...
<new id> [6], <new system template>
THEN!!... we find a bug in the new system template and need to update it... The problem is how? We cannot update the record using ID 6 (as we may have inserted it as 9, or 999), so we have to identify the record using some other mechanism.
So, we've come to two possible solutions for this.
In the red corner (speed)....
We simply start user Ids at 5000 (or some other value) and test data at 10000 (or some other value). This would allow us to make modifications to system values and test them up to the lower limit of the next ID range.
Advantage...Quick and easy to implement,
Disadvantage... could run out of values if we don't choose a big enough range!
In the blue corner (scalability)...
We store, system and user data separately, use GUIDs as Ids and merge the two lists using a view.
Advantage...Scalable..No limits w/regard to DB size.
Disadvantage.. More complicated to implement. (many to one updatable views etc.)
I plump squarely for the first option, but looking for some ammo to back me up!
Does anyone have any thoughts on these approaches, or even one(s) that we've missed?
A:
I recommend using the second with the modification that you store the system and user values in one table. GUID is quite reliable in this manner.
Another idea: use any text-based ID (not necessary GUID), which you give for the system values and is generated by a random string or a string based on some kind of custom logic for the user values.
Another idea: use the first approach, but extend the table with a flag which shows if a value is system or user. Maybe this is the easiest. Ok, you have to write some kind of mechanism to update the correct system value, but it can be done easily.
A:
+1 for Biri's text based ID - define a "template_mnemonic" text based column and make it the primary key. This will be a known value when you insert it as you, the developers will have decided on it (or auto-generated it) and you will always be able to reference a template by its mnemonic regardless of how many user specified templates there are. It also allows users to have a meaningful naming convention for their templates.
A:
I have never had problems (performance or development - TDD & unit testing included) using GUIDs as the ID for my databases, and I've worked on some pretty big ones. Have a look here, here and here if you want to find out more about using GUIDs (and the potential GOTCHAS involved) as your primary keys - but I can't recommend it highly enough since moving data around safely and DB synchronisation becomes as easy as brushing your teeth in the morning :-)
For your question above, I would either recommend a third column (if possible) that indicates whether or not the template is user or system based, or you can at the very least generate GUIDs for system templates as you insert them and keep a list of those on hand, so that if you need to update the template, you can just target that same GUID in your DEV, UAT and/or PRODUCTION databases without fear of overwriting other templates. The third column would come in handy though for selecting all system or user templates at will, without the need to separate them into two tables (this is overkill IMHO).
I hope that helps,
Rob G
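A small C# sketch of the "keep a list of known GUIDs" idea; the class name, template names and GUID values here are all made up:
using System;

// Upgrade scripts and code can target these well-known IDs directly,
// no matter how many user templates have been inserted since.
public static class SystemTemplateIds
{
    public static readonly Guid Invoice = new Guid("5f0a1c9e-3b7d-4e2a-9c1f-8d4b6a2e7c10");
    public static readonly Guid Receipt = new Guid("0b8d2f41-6c3e-4a95-b7d2-1e9f5c8a3d64");
}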
A:
Maybe I didn't get it, but couldn't you use GUIDs as Ids and still have user and system data together? Then you can access the system data by the (non-changeable) GUIDs.
A:
I don't think that GUIDs should be a problem.
If you want to avoid it, then use a flag:
ID int
template whatever
flag enum/int/bool
Flag shows whether the actual value is a system or a user value.
If you would like to update a system value, then ask only for system values ordered by ID, and it will show you the actual order of insertion (you should have a bigint or something for ID to make sure that it doesn't run out and doesn't reuse deleted IDs). With this list, the xth record is the xth inserted system value.
A:
I think there is a better third solution.
It strikes me that you're storing two different things in the same table and that you might be better off creating 2 separate tables: one for user templates and one for system templates. You might then be able to create a view over the two tables to make them appear as a single object to your application.
Obviously I don't have full knowledge of your application and this may be impossible for you for any number of reasons, but I think it's a neater solution than GUIDs and way safer than ranges of IDs (seriously, don't do ID ranges; it'll bite you one day).
| Default database IDs; system and user values | As part of our current database work, we are looking at dealing with the process of updating databases.
A point which has been brought up recurrently is that of dealing with system vs. user values; in our project, user and system values are stored together. For example...
We have a list of templates.
1, <system template>
2, <system template>
3, <system template>
These are mapped in the app to an enum (1, 2, 3)
Then a user comes in and adds...
4, <user template>
...and...
5, <user template>
Then.. we issue an upgrade.. and insert as part of our upgrade scripts...
<new id> [6], <new system template>
THEN!!... we find a bug in the new system template and need to update it... The problem is how? We cannot update the record using ID 6 (as we may have inserted it as 9, or 999), so we have to identify the record using some other mechanism.
So, we've come to two possible solutions for this.
In the red corner (speed)....
We simply start user Ids at 5000 (or some other value) and test data at 10000 (or some other value). This would allow us to make modifications to system values and test them up to the lower limit of the next ID range.
Advantage...Quick and easy to implement,
Disadvantage... could run out of values if we don't choose a big enough range!
In the blue corner (scalability)...
We store, system and user data separately, use GUIDs as Ids and merge the two lists using a view.
Advantage...Scalable..No limits w/regard to DB size.
Disadvantage.. More complicated to implement. (many to one updatable views etc.)
I plump squarely for the first option, but looking for some ammo to back me up!
Does anyone have any thoughts on these approaches, or even one(s) that we've missed?
| [
"I recommend using the second with the modification that you store the system and user values in one table. GUID is quite reliable in this manner.\nAnother idea: use any text-based ID (not necessary GUID), which you give for the system values and is generated by a random string or a string based on some kind of custom logic for the user values.\nAnother idea: use the first approach, but extend the table with a flag which shows if a value is system or user. Maybe this is the easiest. Ok, you have to write some kind of mechanism to update the correct system value, but it can be done easily.\n",
"+1 for Biri's text based ID - define a \"template_mnemonic\" text based column and make it the primary key. This will be a known value when you insert it as you, the developers will have decided on it (or auto-generated it) and you will always be able to reference a template by its mnemonic regardless of how many user specified templates there are. It also allows users to have a meaningful naming convention for their templates.\n",
"I have never had problems (performance or development - TDD & unit testing included) using GUIDs as the ID for my databases, and I've worked on some pretty big ones. Have a look here, here and here if you want to find out more about using GUIDs (and the potential GOTCHAS involved) as your primary keys - but I can't recommend it highly enough since moving data around safely and DB synchronisation becomes as easy as brushing your teeth in the morning :-)\nFor your question above, I would either recommend a third column (if possible) that indicates whether or not the template is user or system based, or you can at the very least generate GUIDs for system templates as you insert them and keep a list of those on hand, so that if you need to update the template, you can just target that same GUID in your DEV, UAT and /or PRODUCTION databases without fear of overwriting other templates. The third column would come in handy though for selecting all system or user templates at will, without the need to seperate them into two tables (this is overkill IMHO).\nI hope that helps,\nRob G\n",
"Maybe I didn't get it, but couldn't you use GUIDs as Ids and still have user and system data together? Then you can access the system data by the (non-changable) GUIDs.\n",
"I don't think that GUID should make any problem.\nIf you want to avoid it, then use a flag:\n\nID int\ntemplate whatever\nflag enum/int/bool\n\nFlag shows whether the actual value is a system or a user value.\nIf you would like to update a system value, then ask only for system values ordered by ID, and it will show you actual order of insertion (you should have a bigint or something for ID to make sure that it doesn't get full and it doesn't get the deleted IDs back to work). With this list the x. record is the x. inserted system value.\n",
"I think there is a better third solution. \nIt strikes me that you're storing two different things in the same table and that you might be better off creating 2 separate tables one for user templates and one for system templates. You might then be able to create a view over the two tables to make them appear as a single object to your application. \nObviously I don't have full knowledge of your application and this may be impossible for you for any number of reasons but I think it's a neater solution than GUIDs and way safer than ranges of IDs (seriously don't do ID ranges it'll bite you one day)\n"
] | [
1,
1,
1,
0,
0,
0
] | [] | [] | [
"database"
] | stackoverflow_0000009702_database.txt |
Q:
Editing User Profile w/ Forms Authentication
We're using Forms Authentication in SharePoint. When the account is created, the administrator can add some information, like name and address. But the required fields are username and email address.
When a user goes to their profile page, all the fields are blank and they are unable to edit them. I have read a number of articles discussing how to import profiles from another data store, or to sync profiles. This doesn't work for us, because we don't have another data store where these profiles are stored.
Will I just have to recreate the edit profile page and build a custom profile editor? Is this information exposed via SharePoint API? I don't think directly editing the database is a good solution.
A:
If you log in to the "Shared Services administration" through the "Central Admin Tool" there is an option "Profile services policies". You can define in here what fields are user-overridable.
| Editing User Profile w/ Forms Authentication | We're using Forms Authentication in SharePoint. When the account is created, the administrator can add some information, like name and address. But the required fields are username and email address.
When a user goes to their profile page, all the fields are blank and they are unable to edit them. I have read a number of articles discussing how to import profiles from another data store, or to sync profiles. This doesn't work for us, because we don't have another data store where these profiles are stored.
Will I just have to recreate the edit profile page and build a custom profile editor? Is this information exposed via SharePoint API? I don't think directly editing the database is a good solution.
| [
"If you log in to the \"Shared Services administration\" through the \"Central Admin Tool\" there is an option \"Profile services policies\". You can define in here what fields are user-overridable.\n"
] | [
4
] | [] | [] | [
"moss",
"sharepoint",
"wss"
] | stackoverflow_0000014611_moss_sharepoint_wss.txt |
Q:
Multiple choice on WinForms
What's the best way of implementing a multiple choice option in Windows Forms? I want to enforce a single selection from a list, starting with a default value.
It seems like a ComboBox would be a good choice, but is there a way to specify a non-blank default value?
I could just set it in the code at some appropriate initialisation point, but I feel like I'm missing something.
A:
If you only want one answer from the group, then a RadioButton control would be your best fit, or you could use the ComboBox if you will have a lot of options. To set a default value, just add the item to the ComboBox's collection and set the SelectedIndex or SelectedItem to that item.
Depending on how many options you are looking at, if it will be multiple choice you can use a ListBox with the SelectionMode property set to MultiSimple, or you could use CheckBox controls.
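For example, a minimal C# sketch of the single-selection case (the control and option names are made up):
// Assumes: using System.Windows.Forms;
// DropDownList stops the user typing a value that is not in the list.
ComboBox combo = new ComboBox();
combo.DropDownStyle = ComboBoxStyle.DropDownList;
combo.Items.AddRange(new object[] { "Small", "Medium", "Large" });
combo.SelectedIndex = 1; // non-blank default: "Medium"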
A:
You should be able to just set the ComboBox.SelectedIndex property with what you want the default value to be.
http://msdn.microsoft.com/en-us/library/system.windows.forms.combobox.selectedindex.aspx
A:
Use the ComboBox.SelectedItem or SelectedIndex property after the items have been inserted to select the default item.
You could also consider using RadioButton control to enforce selection of a single option.
A:
You can use a ComboBox with the DropDownStyle property set to DropDownList and SelectedIndex to 0 (or whatever the default item is). This will force always having an item from the list selected. If you forget to do that, the user could just type something else into the edit box part - which would be bad :)
A:
If you are giving the user a small list of choices then stick with the radio buttons. However, if you want to use the combo box for dynamic or long lists, set the style to DropDownList.
private sub populateList( items as List(of UserChoices))
dim choice as UserChoices
dim defaultChoice as UserChoices
for each choice in items
cboList.items.add(choice)
'-- you could do user specific check or base it on some other
'---- setting to find the default choice here
if choice.state = _user.State or choice.state = _settings.defaultState then
defaultChoice = choice
end if
next
'-- you could select the first one
if cboList.items.count > 0 then
cboList.SelectedItem = cboList.Items(0)
end if
'-- continuation of the default choice
cboList.SelectedItem = defaultChoice
end sub
| Multiple choice on WinForms | What's the best way of implementing a multiple choice option in Windows Forms? I want to enforce a single selection from a list, starting with a default value.
It seems like a ComboBox would be a good choice, but is there a way to specify a non-blank default value?
I could just set it in the code at some appropriate initialisation point, but I feel like I'm missing something.
| [
"If you only want one answer from the group, then a RadioButton control would be your best fit or you could use the ComboBox if you will have a lot of options. To set a default value, just add the item to the ComboBox's collection and set the SelectedIndex or SelectedItem to that item.\nDepending on how many options you are looking at, you can use a ListBox with the SelectionMode property set to MultiSimple, if it will be multiple choice or you could use the CheckBox control.\n",
"You should be able to just set the ComboBox.SelectedIndex property with what you want the default value to be.\nhttp://msdn.microsoft.com/en-us/library/system.windows.forms.combobox.selectedindex.aspx\n",
"Use the ComboBox.SelectedItem or SelectedIndex property after the items have been inserted to select the default item.\nYou could also consider using RadioButton control to enforce selection of a single option.\n",
"You can use a ComboBox with the DropDownStyle property set to DropDownList and SelectedIndex to 0 (or whatever the default item is). This will force always having an item from the list selected. If you forget to do that, the user could just type something else into the edit box part - which would be bad :)\n",
"If you are giving the user a small list of choices then stick with the radio buttons. However, if you will want want to use the combo box for dynamic or long lists. Set the style to DropDownList.\nprivate sub populateList( items as List(of UserChoices))\n dim choices as UserChoices\n dim defaultChoice as UserChoices \n\n for each choice in items\n cboList.items.add(choice)\n '-- you could do user specific check or base it on some other \n '---- setting to find the default choice here\n if choice.state = _user.State or choice.state = _settings.defaultState then \n defaultChoice = choice\n end if \n next \n '-- you chould select the first one\n if cboList.items.count > 0 then\n cboList.SelectedItem = cboList.item(0)\n end if \n\n '-- continuation of hte default choice\n cboList.SelectedItem = defaultChoice\n\nend sub\n\n"
] | [
8,
2,
2,
2,
1
] | [] | [] | [
"combobox",
"winforms"
] | stackoverflow_0000014618_combobox_winforms.txt |
Q:
Static Methods in an Interface/Abstract Class
First off, I understand the reasons why an interface or abstract class (in the .NET/C# terminology) cannot have abstract static methods. My question is then more focused on the best design solution.
What I want is a set of "helper" classes that all have their own static methods such that if I get objects A, B, and C from a third party vendor, I can have helper classes with methods such as
AHelper.RetrieveByID(string id);
AHelper.RetrieveByName(string name);
AHelper.DumpToDatabase();
Since my AHelper, BHelper, and CHelper classes will all basically have the same methods, it seems to make sense to move these methods to an interface that these classes then derive from. However, wanting these methods to be static precludes me from having a generic interface or abstract class for all of them to derive from.
I could always make these methods non-static and then instantiate the objects first such as
AHelper a = new AHelper();
a.DumpToDatabase();
However, this code doesn't seem as intuitive to me. What are your suggestions? Should I abandon using an interface or abstract class altogether (the situation I'm in now) or can this possibly be refactored to accomplish the design I'm looking for?
A:
If I were you I would try to avoid any statics. IMHO I always ended up with some sort of synchronization issues down the road with statics. That being said you are presenting a classic example of generic programming using templates. I will adopt the template based solution of Rob Copper presented in one of the posts above.
A:
Looking at your response I am thinking along the following lines:
You could just have a static method that takes a type parameter and performs the expected logic based on the type.
You could create a virtual method in your abstract base, where you specify the SQL in the concrete class. So that contains all the common code that is required by both (e.g. executing the command and returning the object) while encapsulating the "specialist" bits (e.g. the SQL) in the sub classes.
I prefer the second option, although it's of course down to you. If you need me to go into further detail, please let me know and I will be happy to edit/update :)
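As a rough sketch of that second option, reusing the webservice.query() call shown in the author's follow-up below (the base class and method names here are invented):
// The abstract base owns the shared plumbing; each subclass supplies
// only its own SQL.
public abstract class HelperBase<T>
{
    protected abstract string BuildQuery(string id); // the "specialist" bit

    public T RetrieveByID(string id)
    {
        QueryResult qr = webservice.query(BuildQuery(id));
        return (T)qr.records[0];
    }
}

public class AHelper : HelperBase<AObject>
{
    protected override string BuildQuery(string id)
    {
        return "SELECT Id,Name FROM AObject WHERE Id = '" + id + "'";
    }
}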
A:
For a generic solution to your example, you can do this:
public static T RetrieveByID<T>(string ID)
{
var fieldNames = getFieldNamesBasedOnType(typeof(T));
QueryResult qr = webservice.query("SELECT "+fieldNames + " FROM "
+ typeof(T).Name
+" WHERE Id = '" + ID + "'");
return (T) qr.records[0];
}
A:
I personally would perhaps question why each of the types needs to have a static method before even thinking further..
Why not create a utility class with the static methods that they need to share? (e.g. ClassHelper.RetrieveByID(string id) or ClassHelper<ClassA>.RetrieveByID(string id))
In my experience with these sorts of "roadblocks" the problem is not the limitations of the language, but the limitations of my design..
A:
How are ObjectA and AHelper related? Is AHelper.RetrieveByID() the same logic as BHelper.RetrieveByID()?
If yes, how about a Utility class based approach (class with public static methods only and no state)
static [return type] Helper.RetrieveByID(ObjectX x)
A:
You can't overload methods by varying just the return type.
You can use different names:
static AObject GetAObject(string id);
static BObject GetBObject(string id);
Or you can create a class with casting operators:
class AOrBObject
{
string id;
AOrBObject(string id) {this.id = id;}
static public AOrBObject RetrieveByID(string id)
{
return new AOrBObject(id);
}
public static explicit operator AObject(AOrBObject ab)
{
return AObjectQuery(ab.id);
}
public static explicit operator BObject(AOrBObject ab)
{
return BObjectQuery(ab.id);
}
}
Then you can call it like so:
var a = (AObject) AOrBObject.RetrieveByID("5");
var b = (BObject) AOrBObject.RetrieveByID("5");
A:
In C# 3.0, static methods can be used on interfaces as if they were a part of them by using extension methods, as with DumpToDatabase() below:
static class HelperMethods
{ //IHelper h = new HelperA();
//h.DumpToDatabase()
public static void DumpToDatabase(this IHelper helper) { /* ... */ }
//IHelper h = a.RetrieveByID(5)
public static IHelper RetrieveByID(this ObjectA a, int id)
{
return new HelperA(a.GetByID(id));
}
//IHelper h = b.RetrieveByID(5)
public static IHelper RetrieveByID(this ObjectB b, int id)
{
return new HelperB(b.GetById(id.ToString()));
}
}
A:
How do I post feedback on Stack Overflow? Edit my original post or post an "answer"? Anyway, I thought it might help to give an example of what is going on in AHelper.RetrieveByID() and BHelper.RetrieveByID()
Basically, both of these methods are going up against a third party webservice that returns a generic (castable) object using a Query method that takes a pseudo-SQL string as its only parameter.
So, AHelper.RetrieveByID(string ID) might look like
public static AObject RetrieveByID(string ID)
{
QueryResult qr = webservice.query("SELECT Id,Name FROM AObject WHERE Id = '" + ID + "'");
return (AObject)qr.records[0];
}
public static BObject RetrieveByID(string ID)
{
QueryResult qr = webservice.query("SELECT Id,Name,Company FROM BObject WHERE Id = '" + ID + "'");
return (BObject)qr.records[0];
}
Hopefully that helps. As you can see, the two methods are similar, but the query can be quite a bit different based on the different object type being returned.
Oh, and Rob, I completely agree -- this is more than likely a limitation of my design and not the language. :)
A:
Are you looking for polymorphic behavior? Then you'll want the interface and normal constructor. What is unintuitive about calling a constructor? If you don't need polymorphism (sounds like you don't use it now), then you can stick with your static methods. If these are all wrappers around a vendor component, then maybe you might try to use a factory method to create them like VendorBuilder.GetVendorThing("A") which could return an object of type IVendorWrapper.
A:
marxidad Just a quick point to note, Justin has already said that the SQL varies a lot dependent on the type, so I have worked on the basis that it could be something completely different dependent on the type, hence delegating it to the subclasses in question. Whereas your solution couples the SQL VERY tightly to the Type (i.e. it is the SQL).
rptony Good point on the possible sync issues with statics, one I failed to mention, so thank you :) Also, its Rob Cooper (not Copper) BTW ;) :D ( EDIT: Just thought I would mention that in case it wasn't a typo, I expect it is, so no problem!)
| Static Methods in an Interface/Abstract Class | First off, I understand the reasons why an interface or abstract class (in the .NET/C# terminology) cannot have abstract static methods. My question is then more focused on the best design solution.
What I want is a set of "helper" classes that all have their own static methods such that if I get objects A, B, and C from a third party vendor, I can have helper classes with methods such as
AHelper.RetrieveByID(string id);
AHelper.RetrieveByName(string name);
AHelper.DumpToDatabase();
Since my AHelper, BHelper, and CHelper classes will all basically have the same methods, it seems to make sense to move these methods to an interface that these classes then derive from. However, wanting these methods to be static precludes me from having a generic interface or abstract class for all of them to derive from.
I could always make these methods non-static and then instantiate the objects first such as
AHelper a = new AHelper();
a.DumpToDatabase();
However, this code doesn't seem as intuitive to me. What are your suggestions? Should I abandon using an interface or abstract class altogether (the situation I'm in now) or can this possibly be refactored to accomplish the design I'm looking for?
| [
"If I were you I would try to avoid any statics. IMHO I always ended up with some sort of synchronization issues down the road with statics. That being said you are presenting a classic example of generic programming using templates. I will adopt the template based solution of Rob Copper presented in one of the posts above. \n",
"Looking at your response I am thinking along the following lines:\n\nYou could just have a static method that takes a type parameter and performs the expected logic based on the type.\nYou could create a virtual method in your abstract base, where you specify the SQL in the concrete class. So that contains all the common code that is required by both (e.g. exectuting the command and returning the object) while encapsulating the \"specialist\" bits (e.g. the SQL) in the sub classes.\n\nI prefer the second option, although its of course down to you. If you need me to go into further detail, please let me know and I will be happy to edit/update :)\n",
"For a generic solution to your example, you can do this:\npublic static T RetrieveByID<T>(string ID)\n{\n var fieldNames = getFieldNamesBasedOnType(typeof(T));\n QueryResult qr = webservice.query(\"SELECT \"+fieldNames + \" FROM \"\n + tyepof(T).Name\n +\" WHERE Id = '\" + ID + \"'\");\n return (T) qr.records[0];\n}\n\n",
"I personally would perhaps question why each of the types need to have a static method before even thinking further..\nWhy not create a utlity class with the static methods that they need to share? (e.g. ClassHelper.RetrieveByID(string id) or ClassHelper<ClassA>.RetrieveByID(string id)\nIn my experience with these sort of \"roadblocks\" the problem is not the limitations of the language, but the limitations of my design..\n",
"How are ObjectA and AHelper related? Is AHelper.RetrieveByID() the same logic as BHelper.RetrieveByID()\nIf Yes, how about a Utility class based approach (class with public static methods only and no state)\nstatic [return type] Helper.RetrieveByID(ObjectX x) \n\n",
"You can't overload methods by varying just the return type.\nYou can use different names: \nstatic AObject GetAObject(string id);\nstatic BObject GetBObject(string id);\n\nOr you can create a class with casting operators:\nclass AOrBObject\n{ \n string id;\n AOrBObject(string id) {this.id = id;}\n\n static public AOrBObject RetrieveByID(string id)\n {\n return new AOrBObject(id);\n }\n\n public static AObject explicit operator(AOrBObject ab) \n { \n return AObjectQuery(ab.id);\n }\n\n public static BObject explicit operator(AOrBObject ab)\n { \n return BObjectQuery(ab.id);\n } \n}\n\nThen you can call it like so:\n var a = (AObject) AOrBObject.RetrieveByID(5);\n var b = (BObject) AOrBObject.RetrieveByID(5); \n\n",
"In C# 3.0, static methods can be used on interfaces as if they were a part of them by using extension methods, as with DumpToDatabase() below:\nstatic class HelperMethods\n { //IHelper h = new HeleperA();\n //h.DumpToDatabase() \n public static void DumpToDatabase(this IHelper helper) { /* ... */ }\n\n //IHelper h = a.RetrieveByID(5)\n public static IHelper RetrieveByID(this ObjectA a, int id) \n { \n return new HelperA(a.GetByID(id));\n }\n\n //Ihelper h = b.RetrieveByID(5) \n public static IHelper RetrieveByID(this ObjectB b, int id)\n { \n return new HelperB(b.GetById(id.ToString())); \n }\n }\n\n",
"How do I post feedback on Stack Overflow? Edit my original post or post an \"answer\"? Anyway, I thought it might help to give an example of what is going on in AHelper.RetrieveByID() and BHelper.RetreiveByID()\nBasically, both of these methods are going up against a third party webservice that returns various a generic (castable) object using a Query method that takes in a pseudo-SQL string as its only parameters.\nSo, AHelper.RetrieveByID(string ID) might look like\n\npublic static AObject RetrieveByID(string ID)\n{\n QueryResult qr = webservice.query(\"SELECT Id,Name FROM AObject WHERE Id = '\" + ID + \"'\");\n\n return (AObject)qr.records[0];\n}\n\npublic static BObject RetrieveByID(string ID)\n{\n QueryResult qr = webservice.query(\"SELECT Id,Name,Company FROM BObject WHERE Id = '\" + ID + \"'\");\n\n return (BObject)qr.records[0];\n}\n\nHopefully that helps. As you can see, the two methods are similar, but the query can be quite a bit different based on the different object type being returned.\nOh, and Rob, I completely agree -- this is more than likely a limitation of my design and not the language. :)\n",
"Are you looking for polymorphic behavior? Then you'll want the interface and normal constructor. What is unintuitive about calling a constructor? If you don't need polymorphism (sounds like you don't use it now), then you can stick with your static methods. If these are all wrappers around a vendor component, then maybe you might try to use a factory method to create them like VendorBuilder.GetVendorThing(\"A\") which could return an object of type IVendorWrapper.\n",
"marxidad Just a quick point to note, Justin has already said that the SQL varies a lot dependant on the type, so I have worked on the basis that it could be something completely different dependant on the type, hence delegating it to the subclasses in question. Whereas your solution couples the SQL VERY tightly to the Type (i.e. it is the SQL).\nrptony Good point on the possible sync issues with statics, one I failed to mention, so thank you :) Also, its Rob Cooper (not Copper) BTW ;) :D ( EDIT: Just thought I would mention that in case it wasn't a typo, I expect it is, so no problem!)\n"
] | [
5,
3,
3,
2,
2,
2,
1,
0,
0,
0
] | [] | [] | [
".net",
"abstract_class",
"interface_design",
"static_methods"
] | stackoverflow_0000014614_.net_abstract_class_interface_design_static_methods.txt |
Q:
Is it possible to automatically make check-outs from any VCS?
Let's take a web development environment, where developers check out a project onto their local machines, work on it, and check in changes to development.
These changes are further tested on development and moved live on a regular schedule (e.g. weekly, monthly, etc.).
Is it possible to have an automatic move-up of the latest tagged version (and not the latest check-in, as that might not be 100% stable), for example at 8 AM on Monday mornings, either using a script or a built-in feature of the VCS?
A:
Yes, it is possible. This is usually a feature provided by continuous integration tools. Typically they will get the latest source from version control, build the project, test it (running unit tests) and possibly deploy it on a (test) server.
If you don't require all those steps, you can easily do the same thing with some shell scripting or similar (i.e. checkout from version control and copy to the production folder on the server).
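To illustrate the scripted route, here is a minimal C# sketch that a scheduler (cron, Windows Task Scheduler, etc.) could run at 8 AM on Mondays; the paths and the choice of Subversion are assumptions for the example, not part of the answer above:

using System.Diagnostics;
using System.IO;

class DeployLatest
{
    static void Main()
    {
        // Update the working copy of the stable/tagged branch (assumed layout).
        Process svn = Process.Start("svn", "update /srv/checkout/stable");
        svn.WaitForExit();
        if (svn.ExitCode != 0) return; // on failure, leave the live site untouched

        // Copy the updated files over the live folder (top level only, simplified).
        foreach (string file in Directory.GetFiles("/srv/checkout/stable"))
        {
            File.Copy(file, Path.Combine("/var/www/live", Path.GetFileName(file)), true);
        }
    }
}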
A:
Certainly, but the exact product may be dependent upon the VCS you are using.
What you might want to do is have a few different branches, and migrate up as you progress. E.g., Development -> Stable-Dev -> Beta -> Production. You can then simply auto-update to the latest version of Stable-Dev and Beta for your testers, and always be able to deploy a new Production version at the drop of a hat.
A:
Anything you can do with cvs can be done with the command line, and I am pretty sure svn is the same. Just work out the functionality you want and stick it in a shell script or a command file.
A:
The only two I have experience with are SVN and Mercurial. For Mercurial, you specify which branch you want it to update from (let's say default) and then whenever you merge a branch into default, you can just have the server run:
hg update
Which updates your repository to the latest version of the branch you set it to.
SVN is the same concept; you simply check out the branch you want initially
svn co http://host/repository/branchname/
then you have your server update that with a cron job, ala
svn up
In theory though, any VCS that supports branching (all the good ones do: Git, Mercurial, SVN, etc.) should be able to do something similar to this.
A:
I doubt many VCSs provide this ability directly; however, it should be very simple to script, either as a date- or branch-based checkout.
A:
As a follow-up,
I'm of the opinion that an app should do one job and do it well. Often if you start combining tools into one product, none of them will shine, and most of them will be "alright, sort-of".
If I was doing something like this, I would get myself something like SVN, ANT, and the Subversion Ant Library (http://ant.apache.org/antlibs/svn/index.html) - your mileage may vary though.
| Is it possible to automatically make check-outs from any VCS? | Let's take a web development environment, where developers check out a project onto their local machines, work on it, and check in changes to development.
These changes are further tested on development and moved live on a regular schedule (e.g. weekly, monthly, etc.).
Is it possible to have an automatic move-up of the latest tagged version (and not the latest check-in, as that might not be 100% stable), for example at 8 AM on Monday mornings, either using a script or a built-in feature of the VCS?
| [
"Yes, it is possible. This is usually a feature provided by continuous integration tools. Typically they will get the latest source from version control, build the project, test it (running unit tests) and possibly deploy it on a (test) server. \nIf you don't require all those steps, you can easily do the same thing with some shell scripting or similar (i.e. checkout from version control and copy to the production folder on the server).\n",
"Certainly, but the exact product may be dependent upon the VCS you are using.\nWhat you might want to do, is have a a few different branches, and migrate up as you progress. E.g., Development -> Stable-Dev -> Beta -> Production. You can then simply auto-update to the latest version of Stable-Dev and Beta for your testers, and always be able to deploy a new Production version at the drop of a hat.\n",
"Anything you can do with cvs can be done with the command line, and I am pretty sure svn is the same. Just work out the functionality you want and stick it in a shell script or a command file.\n",
"The only two I have experience with are SVN and Mercurial. For Mercurial, you specify which branch you want it to update from (let's say default) and then whenever you merge a branch into default, you can just have the server run:\nhg update\n\nWhich updates your repository to the latest version of the branch you set it to. \nSVN is the same concept, you only check out which branch you want initially\nsvn co http://host/repository/branchname/\n\nthen you have your server update that with a cron job, ala\nsvn up\n\nIn theory though, any VCS that supports branching (all the good ones do : git, mercurial, SVN, etc...), should be able to do something similar to this.\n",
"I doubt many VCSs provide this ability directly, however it should be very simple to script. Either a date or branch based checkout.\n",
"As a follow up,\nI'm of the opinion that an app should do one job and do it well. Often if you start combining tools into one product, none of them will shine, and most of them will be \"'alright, sort-of\".\nIf I was doing something like this, I would get myself something like SVN, ANT, and Subversion Ant Library (http://ant.apache.org/antlibs/svn/index.html) - your millage may vary though.\n"
] | [
2,
1,
1,
1,
0,
0
] | [] | [] | [
"cvs",
"svn",
"version_control",
"versions"
] | stackoverflow_0000014634_cvs_svn_version_control_versions.txt |